pyemu.utils.gw_utils

MODFLOW support utilities

Module Contents

Classes

GsfReader

a helper class to read a standard modflow-usg gsf file

Functions

modflow_pval_to_template_file(pval_file[, tpl_file])

write a template file for a modflow parameter value file.

modflow_hob_to_instruction_file(hob_file[, ins_file])

write an instruction file for a modflow head observation file

modflow_hydmod_to_instruction_file(hydmod_file[, ins_file])

write an instruction file for a modflow hydmod file

modflow_read_hydmod_file(hydmod_file[, hydmod_outfile])

read a binary hydmod file and return a dataframe of the results

setup_mtlist_budget_obs(list_filename[, gw_filename, ...])

setup observations of gw (and optionally sw) mass budgets from mt3dusgs list file.

_write_mtlist_ins(ins_filename, df, prefix)

write an instruction file for a MT3D-USGS list file

apply_mtlist_budget_obs(list_filename[, gw_filename, ...])

process an MT3D-USGS list file to extract mass budget entries.

setup_mflist_budget_obs(list_filename[, flx_filename, ...])

setup observations of budget volume and flux from modflow list file.

apply_mflist_budget_obs(list_filename[, flx_filename, ...])

process a MODFLOW list file to extract flux and volume water budget

_write_mflist_ins(ins_filename, df, prefix)

write an instruction file for a MODFLOW list file

setup_hds_timeseries(bin_file, kij_dict[, prefix, ...])

a function to setup a forward process to extract time-series style values

apply_hds_timeseries([config_file, postprocess_inact])

process a modflow binary file using a previously written configuration file

_setup_postprocess_hds_timeseries(hds_file, df, ...[, ...])

Dirty function to set up post-processing of concentrations in inactive/dry cells

_apply_postprocess_hds_timeseries([config_file, cinact])

private function to post-process binary files

setup_hds_obs(hds_file[, kperk_pairs, skip, prefix, ...])

a function to setup observations using all values from a layer-stress period pair

last_kstp_from_kper(hds, kper)

function to find the last time step (kstp) for a given stress period (kper) in a modflow head save file

apply_hds_obs(hds_file[, inact_abs_val, precision, text])

process a modflow head save file. A companion function to gw_utils.setup_hds_obs()

setup_sft_obs(sft_file[, ins_file, start_datetime, ...])

writes a post-processor and instruction file for a mt3d-usgs sft output file

apply_sft_obs()

process an mt3d-usgs sft ASCII output file using a previously written config file

setup_sfr_seg_parameters(nam_file[, model_ws, ...])

Setup multiplier parameters for SFR segment data.

setup_sfr_reach_parameters(nam_file[, model_ws, par_cols])

Setup multiplier parameters for reach data, when the reachinput option is specified in sfr.

apply_sfr_seg_parameters([seg_pars, reach_pars])

apply the SFR segment multiplier parameters.

apply_sfr_parameters([seg_pars, reach_pars])

thin wrapper around gw_utils.apply_sfr_seg_parameters()

setup_sfr_obs(sfr_out_file[, seg_group_dict, ...])

setup observations using the sfr ASCII output file. Sets up the ability to aggregate flows for groups of segments.

apply_sfr_obs()

apply the sfr observation process

load_sfr_out(sfr_out_file[, selection])

load an ASCII SFR output file into a dictionary of kper: dataframes.

setup_sfr_reach_obs(sfr_out_file[, seg_reach, ...])

setup observations using the sfr ASCII output file. Sets up sfr point observations using segment and reach numbers.

apply_sfr_reach_obs()

apply the sfr reach observation process.

modflow_sfr_gag_to_instruction_file(gage_output_file)

writes an instruction file for an SFR gage output file to read Flow only at all times

setup_gage_obs(gage_file[, ins_file, start_datetime, ...])

setup a forward run post processor routine for the modflow gage file

apply_gage_obs([return_obs_file])

apply the modflow gage obs post-processor

apply_hfb_pars([par_file])

a function to apply HFB multiplier parameters.

write_hfb_zone_multipliers_template(m)

write a template file for an hfb using multipliers per zone (double yuck!)

write_hfb_template(m)

write a template file for an hfb (yuck!)

Attributes

PP_FMT

PP_NAMES

pyemu.utils.gw_utils.PP_FMT
pyemu.utils.gw_utils.PP_NAMES = ['name', 'x', 'y', 'zone', 'parval1']
pyemu.utils.gw_utils.modflow_pval_to_template_file(pval_file, tpl_file=None)

write a template file for a modflow parameter value file.

Parameters:
  • pval_file (str) – the path and name of the existing modflow pval file

  • tpl_file (str, optional) – template file to write. If None, use pval_file +”.tpl”. Default is None

Note

Uses names in the first column in the pval file as par names.

Returns:

a DataFrame with control file parameter information

Return type:

pandas.DataFrame
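
Example (a minimal usage sketch; "model.pval" is a hypothetical existing pval file):

    from pyemu.utils import gw_utils

    # writes "model.pval.tpl" (the default tpl_file) and returns a dataframe
    # of parameter information for constructing a control file
    par_df = gw_utils.modflow_pval_to_template_file("model.pval")
    print(par_df.head())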

pyemu.utils.gw_utils.modflow_hob_to_instruction_file(hob_file, ins_file=None)

write an instruction file for a modflow head observation file

Parameters:
  • hob_file (str) – the path and name of the existing modflow hob file

  • ins_file (str, optional) – the name of the instruction file to write. If None, hob_file +”.ins” is used. Default is None.

Returns:

a DataFrame with control file observation information

Return type:

pandas.DataFrame

pyemu.utils.gw_utils.modflow_hydmod_to_instruction_file(hydmod_file, ins_file=None)

write an instruction file for a modflow hydmod file

Parameters:
  • hydmod_file (str) – the path and name of the existing modflow hydmod file

  • ins_file (str, optional) – the name of the instruction file to write. If None, hydmod_file +”.ins” is used. Default is None.

Returns:

a DataFrame with control file observation information

Return type:

pandas.DataFrame

Note

calls pyemu.gw_utils.modflow_read_hydmod_file()

pyemu.utils.gw_utils.modflow_read_hydmod_file(hydmod_file, hydmod_outfile=None)

read a binary hydmod file and return a dataframe of the results

Parameters:
  • hydmod_file (str) – The path and name of the existing modflow hydmod binary file

  • hydmod_outfile (str, optional) – output file to write. If None, use hydmod_file +”.dat”. Default is None.

Returns:

a DataFrame with hydmod_file values

Return type:

pandas.DataFrame
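
Example (a sketch assuming a hypothetical binary hydmod output file "model.hyd.bin"):

    from pyemu.utils import gw_utils

    # read the binary hydmod output; an ASCII copy is written to the default
    # hydmod_outfile ("model.hyd.bin.dat") since hydmod_outfile is None
    df = gw_utils.modflow_read_hydmod_file("model.hyd.bin")
    print(df.head())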

pyemu.utils.gw_utils.setup_mtlist_budget_obs(list_filename, gw_filename='mtlist_gw.dat', sw_filename='mtlist_sw.dat', start_datetime='1-1-1970', gw_prefix='gw', sw_prefix='sw', save_setup_file=False)

setup observations of gw (and optionally sw) mass budgets from mt3dusgs list file.

Parameters:
  • list_filename (str) – path and name of the existing MT3D-USGS list file

  • gw_filename (str, optional) – output filename that will contain the gw budget observations. Default is “mtlist_gw.dat”

  • sw_filename (str, optional) – output filename that will contain the sw budget observations. Default is “mtlist_sw.dat”

  • start_datetime (str, optional) – an str that can be parsed into a pandas.TimeStamp. used to give budget observations meaningful names. Default is “1-1-1970”.

  • gw_prefix (str, optional) – a prefix to add to the GW budget observations. Useful if processing more than one list file as part of the forward run process. Default is ‘gw’.

  • sw_prefix (str, optional) – a prefix to add to the SW budget observations. Useful if processing more than one list file as part of the forward run process. Default is ‘sw’.

  • save_setup_file (bool, optional) – a flag to save “_setup_”+ list_filename +”.csv” file that contains useful control file information. Default is False.

Returns:

tuple containing

  • str: the command to add to the forward run script

  • str: the names of the instruction files that were created

  • pandas.DataFrame: a dataframe with information for constructing a control file

Note

writes an instruction file and also a _setup_.csv to use when constructing a pest control file

The instruction files are named out_filename +”.ins”

It is recommended to use the default value for gw_filename or sw_filename.

This is the companion function of gw_utils.apply_mtlist_budget_obs().
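
Example (a setup-time sketch; "mt3d.list" and "forward_run.py" are hypothetical names):

    from pyemu.utils import gw_utils

    frun_line, ins_files, obs_df = gw_utils.setup_mtlist_budget_obs(
        "mt3d.list", gw_prefix="gw", sw_prefix="sw", start_datetime="1-1-1970"
    )

    # append the returned command to the forward run script so the list file
    # is reprocessed into the gw/sw budget files at every model run
    with open("forward_run.py", "a") as f:
        f.write(frun_line + "\n")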

pyemu.utils.gw_utils._write_mtlist_ins(ins_filename, df, prefix)

write an instruction file for a MT3D-USGS list file

pyemu.utils.gw_utils.apply_mtlist_budget_obs(list_filename, gw_filename='mtlist_gw.dat', sw_filename='mtlist_sw.dat', start_datetime='1-1-1970')

process an MT3D-USGS list file to extract mass budget entries.

Parameters:
  • list_filename (str) – the path and name of an existing MT3D-USGS list file

  • gw_filename (str, optional) – the name of the output file with gw mass budget information. Default is “mtlist_gw.dat”

  • sw_filename (str) – the name of the output file with sw mass budget information. Default is “mtlist_sw.dat”

  • start_datetime (str) – an str that can be cast to a pandas.TimeStamp. Used to give observations a meaningful name

Returns:

2-element tuple containing

  • pandas.DataFrame: the gw mass budget dataframe

  • pandas.DataFrame: (optional) the sw mass budget dataframe. If the SFT process is not active, this returned value is None.

Note

This is the companion function of gw_utils.setup_mtlist_budget_obs().

pyemu.utils.gw_utils.setup_mflist_budget_obs(list_filename, flx_filename='flux.dat', vol_filename='vol.dat', start_datetime='1-1-1970', prefix='', save_setup_file=False, specify_times=None)

setup observations of budget volume and flux from modflow list file.

Parameters:
  • list_filename (str) – path and name of the existing modflow list file

  • flx_filename (str, optional) – output filename that will contain the budget flux observations. Default is “flux.dat”

  • vol_filename (str, optional) – output filename that will contain the budget volume observations. Default is “vol.dat”

  • start_datetime (str, optional) – a string that can be parsed into a pandas.TimeStamp. This is used to give budget observations meaningful names. Default is “1-1-1970”.

  • prefix (str, optional) – a prefix to add to the water budget observations. Useful if processing more than one list file as part of the forward run process. Default is ‘’.

  • save_setup_file (bool) – a flag to save “_setup_”+ list_filename +”.csv” file that contains useful control file information

  • specify_times (np.ndarray-like, optional) – An array of times to extract from the budget dataframes returned by the flopy MfListBudget(list_filename).get_dataframe() method. This can be useful to ensure consistent observation times for PEST. The array needs to be alignable with the index of the dataframe returned by the flopy method, so care should be taken to ensure that this is the case. If passed, the times will be written to a “budget_times.config” file as strings to be read by the companion apply_mflist_budget_obs() method at run time.

Returns:

a dataframe with information for constructing a control file.

Return type:

pandas.DataFrame

Note

This method writes instruction files and also a _setup_.csv to use when constructing a pest control file. The instruction files are named <flux_file>.ins and <vol_file>.ins, respectively

It is recommended to use the default values for flux_file and vol_file.

This is the companion function of gw_utils.apply_mflist_budget_obs().
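
Example (a sketch pairing the setup call with its run-time companion; "model.list" is a hypothetical MODFLOW list file):

    from pyemu.utils import gw_utils

    # setup time: writes flux.dat.ins and vol.dat.ins and returns control
    # file information for the budget observations
    obs_df = gw_utils.setup_mflist_budget_obs(
        "model.list", start_datetime="1-1-1970", save_setup_file=True
    )

    # run time (e.g. in forward_run.py): rewrite flux.dat and vol.dat from
    # the new list file so they match the instruction files written above
    gw_utils.apply_mflist_budget_obs("model.list", start_datetime="1-1-1970")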

pyemu.utils.gw_utils.apply_mflist_budget_obs(list_filename, flx_filename='flux.dat', vol_filename='vol.dat', start_datetime='1-1-1970', times=None)

process a MODFLOW list file to extract flux and volume water budget entries.

Parameters:
  • list_filename (str) – path and name of the existing modflow list file

  • flx_filename (str, optional) – output filename that will contain the budget flux observations. Default is “flux.dat”

  • vol_filename (str, optional) – output filename that will contain the budget volume observations. Default is “vol.dat”

  • start_datetime (str, optional) – a string that can be parsed into a pandas.TimeStamp. This is used to give budget observations meaningful names. Default is “1-1-1970”.

  • times – An array of times to extract from the budget dataframes returned by the flopy MfListBudget(list_filename).get_dataframe() method. This can be useful to ensure consistent observation times for PEST. If times is a str, it is assumed to be a filename and a single vector (no header or index) is read from that file, with datetimes parsed using pandas. The array needs to be alignable with the index of the dataframe returned by the flopy method, so care should be taken to ensure that this is the case. If set up with setup_mflist_budget_obs() using the specify_times argument, times should be set to “budget_times.config”.

pyemu.utils.gw_utils._write_mflist_ins(ins_filename, df, prefix)

write an instruction file for a MODFLOW list file

pyemu.utils.gw_utils.setup_hds_timeseries(bin_file, kij_dict, prefix=None, include_path=False, model=None, postprocess_inact=None, text=None, fill=None, precision='single')

a function to setup a forward process to extract time-series style values from a modflow binary file (or equivalent format - hds, ucn, sub, cbb, etc).

Parameters:
  • bin_file (str) – path and name of existing modflow binary file - headsave, cell budget and MT3D UCN supported.

  • kij_dict (dict) – dictionary of site_name: [k,i,j] pairs. For example: {“wel1”:[0,1,1]}.

  • prefix (str, optional) – string to prepend to site_name when forming observation names. Default is None

  • include_path (bool, optional) – flag to set up the binary file processing in the directory where the binary file is located (if different from where python is running). This is useful for setting up the process in a separate directory from where python is running.

  • model (flopy.mbase, optional) – a flopy.basemodel instance. If passed, the observation names will have the datetime of the observation appended to them (using the flopy start_datetime attribute). If None, the observation names will have the zero-based stress period appended to them. Default is None.

  • postprocess_inact (float, optional) – Inactive value in heads/ucn file e.g. mt.btn.cinit. If None, no inactive value processing happens. Default is None.

  • text (str) – the text record entry in the binary file (e.g. “constant_head”). Used to indicate that the binary file is a MODFLOW cell-by-cell budget file. If None, headsave or MT3D unformatted concentration file is assumed. Default is None

  • fill (float) – fill value for NaNs in the extracted timeseries dataframe. If None, no filling is done, which may yield model run failures as the resulting processed timeseries CSV file (produced at runtime) may have missing values and can’t be processed with the corresponding instruction file. Default is None.

  • precision (str) – the precision of the binary file. Can be “single” or “double”. Default is “single”.

Returns:

tuple containing

  • str: the forward run command to execute the binary file process during model runs.

  • pandas.DataFrame: a dataframe of observation information for use in the pest control file

Note

This function writes hds_timeseries.config that must be in the same dir where apply_hds_timeseries() is called during the forward run

Assumes model time units are days

This is the companion function of gw_utils.apply_hds_timeseries().
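
Example (a sketch assuming a hypothetical head-save file and two monitoring locations given as zero-based k,i,j indices):

    from pyemu.utils import gw_utils

    kij_dict = {"well1": [0, 10, 12], "well2": [2, 35, 8]}  # hypothetical sites

    # setup time: writes hds_timeseries.config and an instruction file and
    # returns the forward-run command plus observation information
    frun_line, obs_df = gw_utils.setup_hds_timeseries(
        "model.hds", kij_dict, prefix="hd", precision="single"
    )

    # run time, in the directory containing hds_timeseries.config:
    gw_utils.apply_hds_timeseries()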

pyemu.utils.gw_utils.apply_hds_timeseries(config_file=None, postprocess_inact=None)

process a modflow binary file using a previously written configuration file

Parameters:
  • config_file (str, optional) – configuration file written by pyemu.gw_utils.setup_hds_timeseries. If None, looks for hds_timeseries.config

  • postprocess_inact (float, optional) – Inactive value in heads/ucn file e.g. mt.btn.cinit. If None, no inactive value processing happens. Default is None.

Note

This is the companion function of gw_utils.setup_hds_timeseries().

pyemu.utils.gw_utils._setup_postprocess_hds_timeseries(hds_file, df, config_file, prefix=None, model=None)

Dirty function to set up post-processing of concentrations in inactive/dry cells

pyemu.utils.gw_utils._apply_postprocess_hds_timeseries(config_file=None, cinact=1e+30)

private function to post-process binary files

pyemu.utils.gw_utils.setup_hds_obs(hds_file, kperk_pairs=None, skip=None, prefix='hds', text='head', precision='single', include_path=False)

a function to setup observations using all values from a layer-stress period pair.

Parameters:
  • hds_file (str) – path and name of an existing MODFLOW head-save file. If the hds_file ends with ‘ucn’, then the file is treated as a UcnFile type.

  • kperk_pairs ([(int,int)]) – a list of two-element tuples which are pairs of kper (zero-based stress period index) and k (zero-based layer index) to setup observations for. If None, then all layers and stress period records found in the file will be used. Caution: a very large number of observations may be produced!

  • skip (variable, optional) – a value or function used to determine which values to skip when setting up observations. If skip is a scalar, then values equal to skip will not be used. skip can also be a np.ndarray with dimensions equal to the model; observations are set up only for cells with non-zero values in the array. Otherwise, skip is treated as a function that returns np.NaN if the value should be skipped.

  • prefix (str) – the prefix to use for the observation names. default is “hds”.

  • text (str) – the text tag for the flopy HeadFile instance. Default is “head”

  • precision (str) – the precision string for the flopy HeadFile instance. Default is “single”

  • include_path (bool, optional) – flag to set up the binary file processing in the directory where the hds_file is located (if different from where python is running). This is useful for setting up the process in a separate directory from where python is running.

Returns:

tuple containing

  • str: the forward run script line needed to execute the headsave file observation operation

  • pandas.DataFrame: a dataframe of pest control file information

Note

Writes an instruction file and a _setup_ csv used to construct a control file.

This is the companion function to gw_utils.apply_hds_obs().
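
Example (a sketch; the head-save file name and the skip value are hypothetical):

    from pyemu.utils import gw_utils

    # observe every cell in layer 0 of stress periods 0 and 1, skipping cells
    # whose value equals 1.0e+30 (e.g. dry/inactive cells)
    frun_line, obs_df = gw_utils.setup_hds_obs(
        "model.hds", kperk_pairs=[(0, 0), (1, 0)], skip=1.0e30, prefix="hds"
    )

    # gw_utils.apply_hds_obs("model.hds") is then called during the forward run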

pyemu.utils.gw_utils.last_kstp_from_kper(hds, kper)

function to find the last time step (kstp) for a given stress period (kper) in a modflow head save file.

Parameters:
  • hds (flopy.utils.HeadFile) – head save file

  • kper (int) – the zero-based stress period number

Returns:

the zero-based last time step during stress period kper in the head save file

Return type:

int
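
Example (a short sketch using flopy to open a hypothetical head-save file):

    import flopy
    from pyemu.utils import gw_utils

    hds = flopy.utils.HeadFile("model.hds")  # hypothetical file name
    kstp = gw_utils.last_kstp_from_kper(hds, kper=0)
    print("last time step of stress period 0:", kstp)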

pyemu.utils.gw_utils.apply_hds_obs(hds_file, inact_abs_val=1e+20, precision='single', text='head')

process a modflow head save file. A companion function to gw_utils.setup_hds_obs() that is called during the forward run process

Parameters:
  • hds_file (str) – a modflow head save filename. If hds_file ends with ‘ucn’, then the file is treated as a UcnFile type.

  • inact_abs_val (float, optional) – the value that marks the minimum and maximum active value. values in the headsave file greater than inact_abs_val or less than -inact_abs_val are reset to inact_abs_val

Returns:

a dataframe with extracted simulated values.

Return type:

pandas.DataFrame

Note

This is the companion function to gw_utils.setup_hds_obs().

pyemu.utils.gw_utils.setup_sft_obs(sft_file, ins_file=None, start_datetime=None, times=None, ncomp=1)

writes a post-processor and instruction file for a mt3d-usgs sft output file

Parameters:
  • sft_file (str) – path and name of an existing sft output file (ASCII)

  • ins_file (str, optional) – the name of the instruction file to create. If None, the name is sft_file+”.ins”. Default is None.

  • start_datetime (str) – a pandas.to_datetime() compatible str. If not None, then the resulting observation names have the datetime suffix. If None, the suffix is the output totim. Default is None.

  • times ([float]) – a list of times to make observations for. If None, all times found in the file are used. Default is None.

  • ncomp (int) – number of components in transport model. Default is 1.

Returns:

a dataframe with observation names and values for the sft simulated concentrations.

Return type:

pandas.DataFrame

Note

This is the companion function to gw_utils.apply_sft_obs().
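
Example (a sketch assuming a hypothetical single-component MT3D-USGS SFT output file):

    from pyemu.utils import gw_utils

    # setup time: writes the instruction file and the config used at run time
    obs_df = gw_utils.setup_sft_obs("model.sft.out", start_datetime="1-1-1970", ncomp=1)

    # run time, in the directory containing the config written above:
    gw_utils.apply_sft_obs()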

pyemu.utils.gw_utils.apply_sft_obs()

process an mt3d-usgs sft ASCII output file using a previously written config file

Returns:

a dataframe of extracted simulated outputs

Return type:

pandas.DataFrame

Note

This is the companion function to gw_utils.setup_sft_obs().

pyemu.utils.gw_utils.setup_sfr_seg_parameters(nam_file, model_ws='.', par_cols=None, tie_hcond=True, include_temporal_pars=None)

Setup multiplier parameters for SFR segment data.

Parameters:
  • nam_file (str) – MODFLOW name file. DIS, BAS, and SFR must be available as pathed in the nam_file. Optionally, nam_file can be an existing flopy.modflow.Modflow.

  • model_ws (str) – model workspace for flopy to load the MODFLOW model from

  • par_cols ([str]) – a list of segment data entries to parameterize

  • tie_hcond (bool) – flag to use same mult par for hcond1 and hcond2 for a given segment. Default is True.

  • include_temporal_pars ([str]) – list of spatially-global multipliers to set up for each stress period. Default is None

Returns:

a dataframe with useful parameter setup information

Return type:

pandas.DataFrame

Note

This function handles the standard input case, not all the cryptic SFR options. Loads the dis, bas, and sfr files with flopy using model_ws.

This is the companion function to gw_utils.apply_sfr_seg_parameters(). The number (and numbering) of segment data entries must be consistent across all stress periods.

Writes nam_file +”_backup_.sfr” as the backup of the original sfr file. Skips values = 0.0 since multipliers don’t work for these.
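
Example (a sketch; the name file, workspace, and choice of segment-data columns are hypothetical):

    from pyemu.utils import gw_utils

    # setup time: writes template files and "sfr_seg_pars.config" and backs up
    # the original sfr file
    par_df = gw_utils.setup_sfr_seg_parameters(
        "model.nam", model_ws=".", par_cols=["flow", "hcond1"], tie_hcond=True
    )

    # run time (forward_run.py): apply the multipliers to the backed-up
    # segment data and rewrite the sfr file
    gw_utils.apply_sfr_seg_parameters(seg_pars=True, reach_pars=False)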

pyemu.utils.gw_utils.setup_sfr_reach_parameters(nam_file, model_ws='.', par_cols=['strhc1'])

Setup multiplier parameters for reach data, when the reachinput option is specified in sfr.

Parameters:
  • nam_file (str) – MODFLOW name file. DIS, BAS, and SFR must be available as pathed in the nam_file. Optionally, nam_file can be an existing flopy.modflow.Modflow.

  • model_ws (str) – model workspace for flopy to load the MODFLOW model from

  • par_cols ([str]) – a list of reach data entries to parameterize. Default is [‘strhc1’]

Returns:

a dataframe with useful parameter setup information

Return type:

pandas.DataFrame

Note

Similar to gw_utils.setup_sfr_seg_parameters(), this method applies parameters to the sfr reach data.

Can load the dis, bas, and sfr files with flopy using model_ws. Or can pass a model object (SFR loading can be slow)

This is the companion function of gw_utils.apply_sfr_reach_parameters(). Skips values = 0.0 since multipliers don’t work for these.

pyemu.utils.gw_utils.apply_sfr_seg_parameters(seg_pars=True, reach_pars=False)

apply the SFR segment multiplier parameters.

Parameters:
  • seg_pars (bool, optional) – flag to apply segment-based parameters. Default is True

  • reach_pars (bool, optional) – flag to apply reach-based parameters. Default is False

Returns:

the modified SFR package instance

Return type:

flopy.modflow.ModflowSfr

Note

Expects “sfr_seg_pars.config” to exist

Expects nam_file +”_backup_.sfr” to exist

pyemu.utils.gw_utils.apply_sfr_parameters(seg_pars=True, reach_pars=False)

thin wrapper around gw_utils.apply_sfr_seg_parameters()

Parameters:
  • seg_pars (bool, optional) – flag to apply segment-based parameters. Default is True

  • reach_pars (bool, optional) – flag to apply reach-based parameters. Default is False

Returns:

the modified SFR package instance

Return type:

flopy.modflow.ModflowSfr

Note

Expects “sfr_seg_pars.config” to exist

Expects nam_file +”_backup_.sfr” to exist

pyemu.utils.gw_utils.setup_sfr_obs(sfr_out_file, seg_group_dict=None, ins_file=None, model=None, include_path=False)

setup observations using the sfr ASCII output file. Sets up the ability to aggregate flows for groups of segments. Applies only flow to aquifer and flow out.

Parameters:
  • sfr_out_file (str) – the name and path to an existing SFR output file

  • seg_group_dict (dict) – a dictionary of SFR segments to aggregate together for a single obs. the key value in the dict is the base observation name. If None, all segments are used as individual observations. Default is None

  • model (flopy.mbase) – a flopy model. If passed, the observation names will have the datetime of the observation appended to them. If None, the observation names will have the stress period appended to them. Default is None.

  • include_path (bool) – flag to prepend sfr_out_file path to sfr_obs.config. Useful for setting up the process in a separate directory from where python is running.

Returns:

dataframe of observation name, simulated value and group.

Return type:

pandas.DataFrame

Note

This is the companion function of gw_utils.apply_sfr_obs().

This function writes “sfr_obs.config” which must be kept in the dir where “gw_utils.apply_sfr_obs()” is being called during the forward run
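
Example (a sketch assuming a hypothetical SFR ASCII output file; the segment grouping is also hypothetical):

    from pyemu.utils import gw_utils

    # aggregate two groups of segments into one observation series each
    seg_groups = {"upstream": [1, 2, 3], "downstream": [10, 11]}

    # setup time: writes "sfr_obs.config" and an instruction file
    obs_df = gw_utils.setup_sfr_obs("model.sfr.out", seg_group_dict=seg_groups)

    # run time, in the directory containing sfr_obs.config:
    gw_utils.apply_sfr_obs()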

pyemu.utils.gw_utils.apply_sfr_obs()

apply the sfr observation process

Parameters:

None

Returns:

a dataframe of aggregated sfr segment aquifer and outflow

Return type:

pandas.DataFrame

Note

This is the companion function of gw_utils.setup_sfr_obs().

Requires sfr_obs.config.

Writes sfr_out_file+”.processed”, where sfr_out_file is defined in “sfr_obs.config”

pyemu.utils.gw_utils.load_sfr_out(sfr_out_file, selection=None)

load an ASCII SFR output file into a dictionary of kper: dataframes.

Parameters:
  • sfr_out_file (str) – SFR ASCII output file

  • selection (pandas.DataFrame) – a dataframe of reach and segment pairs to load. If None, all reach-segment pairs are loaded. Default is None.

Returns:

dictionary of {kper:pandas.DataFrame} of SFR output.

Return type:

dict

Note

Aggregates flow to aquifer for segments and returns flow out at the downstream end of each segment.

pyemu.utils.gw_utils.setup_sfr_reach_obs(sfr_out_file, seg_reach=None, ins_file=None, model=None, include_path=False)

setup observations using the sfr ASCII output file. Sets up sfr point observations using segment and reach numbers.

Parameters:
  • sfr_out_file (str) – the path and name of an existing SFR output file

  • seg_reach (varies) – a dict, or list of SFR [segment,reach] pairs identifying locations of interest. If dict, the key value in the dict is the base observation name. If None, all reaches are used as individual observations. Default is None - THIS MAY SET UP A LOT OF OBS!

  • model (flopy.mbase) – a flopy model. If passed, the observation names will have the datetime of the observation appended to them. If None, the observation names will have the stress period appended to them. Default is None.

  • include_path (bool) – a flag to prepend sfr_out_file path to sfr_reach_obs.config. Useful for setting up the process in a separate directory from where python is running.

Returns:

a dataframe of observation names, values, and groups

Return type:

pd.DataFrame

Note

This is the companion function of gw_utils.apply_sfr_reach_obs().

This function writes “sfr_reach_obs.config” which must be kept in the dir where “apply_sfr_reach_obs()” is being called during the forward run

pyemu.utils.gw_utils.apply_sfr_reach_obs()

apply the sfr reach observation process.

Returns:

a dataframe of sfr aquifer and outflow at segment, reach locations

Return type:

pd.DataFrame

Note

This is the companion function of gw_utils.setup_sfr_reach_obs().

Requires sfr_reach_obs.config.

Writes <sfr_out_file>.processed, where <sfr_out_file> is defined in “sfr_reach_obs.config”

pyemu.utils.gw_utils.modflow_sfr_gag_to_instruction_file(gage_output_file, ins_file=None, parse_filename=False)

writes an instruction file for an SFR gage output file to read Flow only at all times

Parameters:
  • gage_output_file (str) – the gage output filename (ASCII).

  • ins_file (str, optional) – the name of the instruction file to create. If None, the name is gage_output_file +”.ins”. Default is None

  • parse_filename (bool) – if True, get the gage_num parameter by parsing the gage output file filename; if False, get the gage number from the file itself

Returns:

tuple containing

  • pandas.DataFrame: a dataframe with obsnme and obsval for the sfr simulated flows.

  • str: file name of instructions file relating to gage output.

  • str: file name of processed gage output for all times

Note

Sets up observations for gage outputs only for the Flow column.

If parse_filename is true, only text up to the first ‘.’ is used as the gage_num

pyemu.utils.gw_utils.setup_gage_obs(gage_file, ins_file=None, start_datetime=None, times=None)

setup a forward run post processor routine for the modflow gage file

Parameters:
  • gage_file (str) – the gage output file (ASCII)

  • ins_file (str, optional) – the name of the instruction file to create. If None, the name is gage_file+”.processed.ins”. Default is None

  • start_datetime (str) – a pandas.to_datetime() compatible str. If not None, then the resulting observation names have the datetime suffix. If None, the suffix is the output totim. Default is None.

  • times ([float]) – a container of times to make observations for. If None, all times are used. Default is None.

Returns:

tuple containing

  • pandas.DataFrame: a dataframe with observation name and simulated values for the values in the gage file.

  • str: file name of instructions file that was created relating to gage output.

  • str: file name of processed gage output (processed according to times passed above.)

Note

Sets up observations for gage outputs (all columns).

This is the companion function of gw_utils.apply_gage_obs()
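
Example (a sketch; the gage output file name and times are hypothetical):

    from pyemu.utils import gw_utils

    # setup time: process the gage file for two output times and write an
    # instruction file for the processed output
    obs_df, ins_file, processed_file = gw_utils.setup_gage_obs(
        "model.gage1.go", start_datetime="1-1-1970", times=[365.0, 730.0]
    )

    # run time: regenerate the processed gage output for those same times
    gw_utils.apply_gage_obs()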

pyemu.utils.gw_utils.apply_gage_obs(return_obs_file=False)

apply the modflow gage obs post-processor

Parameters:

return_obs_file (bool) – flag to return the processed observation file. Default is False.

Note

This is the companion function of gw_utils.setup_gage_obs()

pyemu.utils.gw_utils.apply_hfb_pars(par_file='hfb6_pars.csv')

a function to apply HFB multiplier parameters.

Parameters:

par_file (str) – the HFB parameter info file. Default is “hfb6_pars.csv”

Note

This is the companion function to gw_utils.write_hfb_zone_multipliers_template()

This is to account for the horrible HFB6 format that differs from other BCs making this a special case

Requires par_file (default “hfb6_pars.csv”) to exist at run time

Should be added to the forward_run.py script

pyemu.utils.gw_utils.write_hfb_zone_multipliers_template(m)

write a template file for an hfb using multipliers per zone (double yuck!)

Parameters:

m (flopy.modflow.Modflow) – a model instance with an HFB package

Returns:

tuple containing

  • dict: a dictionary with original unique HFB conductivity values and their corresponding parameter names

  • str: the template filename that was created
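
Example (a sketch assuming a flopy model with an HFB6 package can be loaded; the name file and workspace are hypothetical):

    import flopy
    from pyemu.utils import gw_utils

    m = flopy.modflow.Modflow.load("model.nam", model_ws=".", check=False)

    # setup time: write a template of zone-based HFB conductivity multipliers
    hfb_mults, tpl_file = gw_utils.write_hfb_zone_multipliers_template(m)

    # run time (forward_run.py): apply the multipliers to rebuild the HFB file
    gw_utils.apply_hfb_pars(par_file="hfb6_pars.csv")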

pyemu.utils.gw_utils.write_hfb_template(m)

write a template file for an hfb (yuck!)

Parameters:

m – a model instance with an HFB package

class pyemu.utils.gw_utils.GsfReader(gsffilename)

a helper class to read a standard modflow-usg gsf file

Parameters:

gsffilename (str) – filename

get_vertex_coordinates()
Returns:

Dictionary containing list of x, y and z coordinates for each vertex

get_node_data()
Returns:

a pd.DataFrame containing Node information; Node, X, Y, Z, layer, numverts, vertidx

Return type:

nodedf

get_node_coordinates(zcoord=False, zero_based=False)
Parameters:
  • zcoord (bool) – flag to add z coord to coordinates. Default is False

  • zero_based (bool) – flag to subtract one from the node numbers in the returned node_coords dict. This is needed to support PstFrom. Default is False

Returns:

Dictionary containing x and y coordinates for each node

Return type:

node_coords
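
Example (a sketch assuming a hypothetical MODFLOW-USG gsf file):

    from pyemu.utils.gw_utils import GsfReader

    gsf = GsfReader("model.gsf")  # hypothetical grid specification file

    node_df = gsf.get_node_data()                 # Node, X, Y, Z, layer, ...
    node_xy = gsf.get_node_coordinates(zero_based=True)
    verts = gsf.get_vertex_coordinates()
    print(node_df.head())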