Type: | Package |
Title: | Download Geospatial Data Available from Several Federated Data Sources |
Version: | 4.3.0 |
Date: | 2025-04-12 |
Description: | Download geospatial data available from several federated data sources (mainly sources maintained by the US Federal government). Currently, the package enables extraction from nine datasets: The National Elevation Dataset digital elevation models (1 and 1/3 arc-second; https://www.usgs.gov/3d-elevation-program; USGS); The National Hydrography Dataset (https://www.usgs.gov/national-hydrography/national-hydrography-dataset; USGS); The Soil Survey Geographic (SSURGO) database from the National Cooperative Soil Survey (https://websoilsurvey.sc.egov.usda.gov/; NCSS), which is led by the Natural Resources Conservation Service (NRCS) under the USDA; the Global Historical Climatology Network (https://www.ncei.noaa.gov/products/land-based-station/global-historical-climatology-network-daily; GHCN), coordinated by the National Climatic Data Center at NOAA; the Daymet gridded estimates of daily weather parameters for North America, version 4, available from the Oak Ridge National Laboratory's Distributed Active Archive Center (https://daymet.ornl.gov/; DAAC); the International Tree Ring Data Bank; the National Land Cover Database (https://www.mrlc.gov/; NLCD); the Cropland Data Layer from the National Agricultural Statistics Service (https://www.nass.usda.gov/Research_and_Science/Cropland/SARS1a.php; NASS); and the PAD-US dataset of protected area boundaries (https://www.usgs.gov/programs/gap-analysis-project/science/pad-us-data-overview; USGS). |
License: | MIT + file LICENSE |
URL: | https://docs.ropensci.org/FedData/, https://github.com/ropensci/FedData |
BugReports: | https://github.com/ropensci/FedData/issues |
SystemRequirements: | GDAL (>= 3.1.0) |
Depends: | R (≥ 4.1.0) |
Imports: | curl, httr, dplyr, tibble, tidyr, stringr, igraph, xml2, lifecycle, lubridate, progress, purrr, readr, terra (≥ 1.0), sf (≥ 1.0), ggplot2, glue, magrittr, jsonlite |
Encoding: | UTF-8 |
LazyData: | true |
NeedsCompilation: | no |
Repository: | CRAN |
RoxygenNote: | 7.3.2 |
Suggests: | arcgislayers (≥ 0.2.0), knitr, leaflet, mapview, ncdf4, rmapshaper, testthat, usethis |
Packaged: | 2025-04-12 20:38:45 UTC; kyle.bocinsky |
Author: | R. Kyle Bocinsky |
Maintainer: | R. Kyle Bocinsky <bocinsky@gmail.com> |
Date/Publication: | 2025-04-12 21:00:09 UTC |
Pipe operator
Description
See magrittr::%>% for details.
Usage
lhs %>% rhs
Arguments
lhs |
A value or the magrittr placeholder. |
rhs |
A function call using the magrittr semantics. |
Value
The result of calling rhs(lhs).
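Examples
# The pipe forwards the left-hand side into the first argument
# of the right-hand side call.
c(1, 4, 9) %>% sqrt() %>% sum()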
Scaffolds the common pattern of selecting a layer and filtering by a geometry from an ArcGIS feature service.
Description
This function uses the arcgislayers package, which has had compatibility issues on several commonly used platforms. It is mainly here for historical reasons.
Usage
agol_filter(url, layer_name = NULL, geom, simplify = TRUE)
Arguments
url |
the url of the remote resource. Must be of length one. |
layer_name |
the name(s) associated with the layer(s) you want
to retrieve. Can be a character vector. If NULL (the default), all layers are returned. |
geom |
an object of class sf, sfc, or bbox used to spatially filter the layers. |
simplify |
when only one layer is requested, just return the object rather than a list of length one. |
Value
An sf object, or a data.frame, or a list of these objects if layer_name == NULL or if length(layer_name) > 1. Missing layers return NULL.
Examples
## Not run:
# Get a single layer
agol_filter(
url = "https://hydro.nationalmap.gov/arcgis/rest/services/NHDPlus_HR/MapServer/",
layer_name = "WBDHU12",
geom = FedData::meve
)
# Can be returned as a list
agol_filter(
url = "https://hydro.nationalmap.gov/arcgis/rest/services/NHDPlus_HR/MapServer/",
layer_name = "WBDHU12",
geom = FedData::meve,
simplify = FALSE
)
# Get a list with all layers
agol_filter(
url = "https://hydro.nationalmap.gov/arcgis/rest/services/NHDPlus_HR/MapServer/",
geom = FedData::meve
)
# Or include a vector of layer names
# Note that missing layers are returned as `NULL` values
agol_filter(
url = "https://hydro.nationalmap.gov/arcgis/rest/services/NHDPlus_HR/MapServer/",
layer_name = c(
"NHDPoint",
"NetworkNHDFlowline",
"NonNetworkNHDFlowline",
"NHDLine",
"NHDArea",
"NHDWaterbody"
),
geom = FedData::meve
)
## End(Not run)
Scaffolds the common pattern of selecting a layer and filtering by a geometry from an ArcGIS feature service.
Description
This function does not use the arcgislayers package, which has had compatibility issues on several commonly used platforms.
Usage
agol_filter_httr(url, layer_name = NULL, geom, simplify = TRUE)
Arguments
url |
the url of the remote resource. Must be of length one. |
layer_name |
the name(s) associated with the layer(s) you want
to retrieve. Can be a character vector. If NULL (the default), all layers are returned. |
geom |
an object of class sf, sfc, or bbox used to spatially filter the layers. |
simplify |
when only one layer is requested, just return the object rather than a list of length one. |
Value
An sf object, or a data.frame, or a list of these objects if layer_name == NULL or if length(layer_name) > 1. Missing layers return NULL.
Examples
## Not run:
# Get a single layer
agol_filter_httr(
url = "https://hydro.nationalmap.gov/arcgis/rest/services/NHDPlus_HR/MapServer/",
layer_name = "WBDHU12",
geom = FedData::meve
)
# Can be returned as a list
agol_filter_httr(
url = "https://hydro.nationalmap.gov/arcgis/rest/services/NHDPlus_HR/MapServer/",
layer_name = "WBDHU12",
geom = FedData::meve,
simplify = FALSE
)
# Get a list with all layers
agol_filter_httr(
url = "https://hydro.nationalmap.gov/arcgis/rest/services/NHDPlus_HR/MapServer/",
geom = FedData::meve
)
# Or include a vector of layer names
# Note that missing layers are returned as `NULL` values
agol_filter_httr(
url = "https://hydro.nationalmap.gov/arcgis/rest/services/NHDPlus_HR/MapServer/",
layer_name = c(
"NHDPoint",
"NetworkNHDFlowline",
"NonNetworkNHDFlowline",
"NHDLine",
"NHDArea",
"NHDWaterbody"
),
geom = FedData::meve
)
## End(Not run)
Check whether a web service is unavailable, and stop execution if it is.
Description
Check whether a web service is unavailable, and stop execution if it is.
Usage
check_service(x)
Arguments
x |
The path to the web service. |
Value
An error if the service is unavailable.
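Examples
## Not run:
# A minimal sketch: stop early if the NHD map service is unreachable.
check_service("https://hydro.nationalmap.gov/arcgis/rest/services/NHDPlus_HR/MapServer/")
## End(Not run)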
Use curl to download a file.
Description
This function makes it easy to implement timestamping and no-clobber of files.
Usage
download_data(
url,
destdir = getwd(),
timestamping = TRUE,
nc = FALSE,
verbose = FALSE,
progress = FALSE
)
Arguments
url |
The location of a file. |
destdir |
Where the file should be downloaded to. |
timestamping |
Should only newer files be downloaded? |
nc |
Should files of the same type not be clobbered? |
verbose |
Should cURL output be shown? |
progress |
Should a progress bar be shown with cURL output? |
Details
If both timestamping and nc are TRUE, the no-clobber behavior takes precedence over timestamping.
Value
A character string of the file path to the downloaded file.
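Examples
## Not run:
# A minimal sketch; the URL below is a placeholder.
local_file <- download_data(
url = "https://example.com/archive.zip",
destdir = tempdir(),
timestamping = TRUE,
progress = TRUE
)
## End(Not run)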
Download the 1-km DAYMET daily weather dataset for a region as a netcdf.
Description
Data are downloaded in the NetCDF format. download_daymet_thredds returns the path to the downloaded NetCDF file.
Usage
download_daymet_thredds(bbox, element, year, region, tempo)
Arguments
bbox |
the bounding box in WGS84 coordinates as a comma-separated character vector "xmin,ymin,xmax,ymax" |
element |
An element to extract. |
year |
An integer year to extract. |
region |
The name of a region. The available regions are: 'na' (North America), 'hi' (Hawaii), and 'pr' (Puerto Rico). |
tempo |
The frequency of the data. The available tempos are: 'day' (daily) and 'mon' (monthly aggregates). |
Value
A named list of character vectors, each representing the full local paths of the tile downloads.
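Examples
## Not run:
# A minimal sketch; the bbox covers the area around Mesa Verde
# National Park, and the region/tempo values are assumptions
# (see get_daymet).
download_daymet_thredds(
bbox = "-108.6,37.1,-108.3,37.4",
element = "prcp",
year = 2015,
region = "na",
tempo = "day"
)
## End(Not run)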
Download the daily data for a GHCN weather station.
Description
Download the daily data for a GHCN weather station.
Usage
download_ghcn_daily_station(ID, raw.dir, force.redo = FALSE)
Arguments
ID |
A character string giving the station ID. |
raw.dir |
A character string indicating where raw downloaded files should be put. |
force.redo |
If this weather station has been downloaded before, should it be updated? Defaults to FALSE. |
Value
A character string representing the full local path of the GHCN station data.
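Examples
## Not run:
# A minimal sketch; the station ID below is hypothetical.
ghcn_file <- download_ghcn_daily_station(
ID = "USC00050848",
raw.dir = file.path(tempdir(), "FedData", "raw", "ghcn")
)
## End(Not run)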
Download the latest version of the ITRDB.
Description
Downloads and parses the latest zipped (numbered) version of the ITRDB.
This function includes improvements to the read.crn function from the
dplR library. The principal changes are better parsing of metadata and support
for the Schweingruber-type Tucson format. Chronologies that cannot be read
are reported to the user.
Usage
download_itrdb(
raw.dir = paste0(tempdir(), "/FedData/raw/itrdb"),
force.redo = FALSE
)
Arguments
raw.dir |
A character string indicating where raw downloaded files should be put. The directory will be created if missing. Defaults to a temporary FedData directory (see Usage). |
force.redo |
If a download already exists, should a new one be created? Defaults to FALSE. |
Value
A data frame containing all of the ITRDB data.
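Examples
## Not run:
# Download and parse the full ITRDB (a large download).
itrdb <- download_itrdb(
raw.dir = file.path(tempdir(), "FedData", "raw", "itrdb")
)
## End(Not run)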
Download a zipped directory containing a shapefile of the SSURGO study areas.
Description
Download a zipped directory containing a shapefile of the SSURGO study areas.
Usage
download_ssurgo_inventory(raw.dir, ...)
Arguments
raw.dir |
A character string indicating where raw downloaded files should be put. |
Value
A character string representing the full local path of the SSURGO study areas zipped directory.
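Examples
## Not run:
# A minimal sketch: fetch the zipped SSURGO study-area inventory.
inventory_zip <- download_ssurgo_inventory(
raw.dir = file.path(tempdir(), "FedData", "raw", "ssurgo")
)
## End(Not run)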
Download a zipped directory containing the spatial and tabular data for a SSURGO study area.
Description
download_ssurgo_study_area first tries to download data including a state-specific Access template, then the general US template.
Usage
download_ssurgo_study_area(area, date, raw.dir)
Arguments
area |
A character string indicating the SSURGO study area to be downloaded. |
date |
A character string indicating the date of the most recent update to the SSURGO
area for these data. This information may be gleaned from the SSURGO Inventory ( |
raw.dir |
A character string indicating where raw downloaded files should be put. |
Value
A character string representing the full local path of the SSURGO study areas zipped directory.
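Examples
## Not run:
# A minimal sketch; the date string is a placeholder and must match
# the most recent update date listed in the SSURGO inventory.
ssurgo_zip <- download_ssurgo_study_area(
area = "CO670",
date = "09/12/2021",
raw.dir = file.path(tempdir(), "FedData", "raw", "ssurgo")
)
## End(Not run)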
Extract data from a SSURGO database pertaining to a set of mapunits.
Description
extract_ssurgo_data creates a directed graph of the joins in a SSURGO tabular dataset, and then iterates through the tables, only retaining data pertinent to a set of mapunits.
Usage
extract_ssurgo_data(tables, mapunits)
Arguments
tables |
A list of SSURGO tabular data. |
mapunits |
A character vector of mapunits (likely dropped from SSURGO spatial data) defining which mapunits to retain. |
Value
A list of extracted SSURGO tabular data.
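Examples
## Not run:
# A minimal sketch: keep only the tabular records for the mapunits
# present in a previously extracted spatial dataset.
ssurgo <- get_ssurgo(template = FedData::meve, label = "meve")
ssurgo_sub <- extract_ssurgo_data(
tables = ssurgo$tabular,
mapunits = unique(ssurgo$spatial$MUKEY)
)
## End(Not run)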
Download and crop the 1-km DAYMET v4 daily weather dataset.
Description
get_daymet returns a SpatRaster of weather data cropped to a given template study area.
Usage
get_daymet(
template,
label,
elements = c("dayl", "prcp", "srad", "swe", "tmax", "tmin", "vp"),
years = 1980:(lubridate::year(Sys.time()) - 1),
region = "na",
tempo = "day",
extraction.dir = file.path(tempdir(), "FedData", "extractions", "daymet", label),
raster.options = c("COMPRESS=DEFLATE", "ZLEVEL=9", "INTERLEAVE=BAND"),
force.redo = FALSE,
progress = TRUE
)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
label |
A character string naming the study area. |
elements |
A character vector of elements to extract. |
years |
A numeric vector of years to extract. |
region |
The name of a region. The available regions are: 'na' (North America, the default), 'hi' (Hawaii), and 'pr' (Puerto Rico). |
tempo |
The frequency of the data. The available tempos are: 'day' (daily, the default) and 'mon' (monthly aggregates). |
extraction.dir |
A character string indicating where the extracted and cropped DEM should be put. Defaults to a temporary directory. |
raster.options |
a vector of GDAL options passed to terra::writeRaster. |
force.redo |
If an extraction for this template and label already exists in extraction.dir, should a new one be created? |
progress |
Draw a progress bar when downloading? |
Value
A named list of SpatRasters of weather data cropped to the extent of the template.
Examples
## Not run:
library(terra)
# Get the DAYMET (North America only)
# Returns a named list of SpatRasters
DAYMET <- get_daymet(
template = FedData::meve,
label = "meve",
elements = c("prcp", "tmin", "tmax"),
years = 1985
)
# Plot with terra::plot
plot(DAYMET$tmin$`1985-10-23`)
## End(Not run)
Download and crop the Global Historical Climate Network-Daily data.
Description
get_ghcn_daily returns a named list of length 2:
'spatial': A Simple Feature collection of the locations of GHCN weather stations in the template, and
'tabular': A named list of data.frames with the daily weather data for each station. The name of each list item is the station ID.
Usage
get_ghcn_daily(
template = NULL,
label = NULL,
elements = NULL,
years = NULL,
raw.dir = file.path(tempdir(), "FedData", "raw", "ghcn"),
extraction.dir = file.path(tempdir(), "FedData", "extractions", "ned", label),
standardize = FALSE,
force.redo = FALSE
)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
label |
A character string naming the study area. |
elements |
A character vector of elements to extract (e.g., 'tmin', 'tmax', 'prcp'). Among the many GHCN element codes: ACMC = Average cloudiness midnight to midnight from 30-second ceilometer data (percent). |
years |
A numeric vector indicating which years to get. |
raw.dir |
A character string indicating where raw downloaded files should be put. The directory will be created if missing. Defaults to a temporary FedData directory (see Usage). |
extraction.dir |
A character string indicating where the extracted and cropped GHCN shapefiles should be put. The directory will be created if missing. Defaults to a temporary FedData directory (see Usage). |
standardize |
Select only common year/month/day? Defaults to FALSE. |
force.redo |
If an extraction for this template and label already exists, should a new one be created? Defaults to FALSE. |
Value
A named list containing the 'spatial' and 'tabular' data.
Examples
## Not run:
# Get the daily GHCN data (GLOBAL)
# Returns a list: the first element is the spatial locations of stations,
# and the second is a list of the stations and their daily data
GHCN.prcp <-
get_ghcn_daily(
template = FedData::meve,
label = "meve",
elements = c("prcp")
)
# Plot the VEP polygon
plot(meve)
# Plot the spatial locations
plot(GHCN.prcp$spatial$geometry, pch = 1, add = TRUE)
legend("bottomleft", pch = 1, legend = "GHCN Precipitation Records")
# Elements for which you require the same data
# (i.e., minimum and maximum temperature for the same days)
# can be standardized using `standardize = TRUE`
GHCN.temp <- get_ghcn_daily(
template = FedData::meve,
label = "meve",
elements = c("tmin", "tmax"),
standardize = TRUE
)
# Plot the VEP polygon
plot(meve)
# Plot the spatial locations
plot(GHCN.temp$spatial$geometry, pch = 1, add = TRUE)
legend("bottomleft", pch = 1, legend = "GHCN Temperature Records")
## End(Not run)
Download and extract the daily data for a GHCN weather station.
Description
get_ghcn_daily_station returns a named list of data.frames, one for each of the elements. If elements is undefined, it returns all available weather tables for the station.
Usage
get_ghcn_daily_station(
ID,
elements = NULL,
years = NULL,
raw.dir,
standardize = FALSE,
force.redo = FALSE
)
Arguments
ID |
A character string giving the station ID. |
elements |
A character vector of elements to extract (e.g., 'tmin', 'tmax', 'prcp'). Among the many GHCN element codes: ACMC = Average cloudiness midnight to midnight from 30-second ceilometer data (percent). |
years |
A numeric vector indicating which years to get. |
raw.dir |
A character string indicating where raw downloaded files should be put. |
standardize |
Select only common year/month/day? Defaults to FALSE. |
force.redo |
If this weather station has been downloaded before, should it be updated? Defaults to FALSE. |
Value
A named list of data.frames, one for each of the elements.
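Examples
## Not run:
# A minimal sketch; the station ID below is hypothetical.
station_data <- get_ghcn_daily_station(
ID = "USC00050848",
elements = c("tmin", "tmax"),
raw.dir = file.path(tempdir(), "FedData", "raw", "ghcn")
)
## End(Not run)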
Download and crop the inventory of GHCN stations.
Description
get_ghcn_inventory returns a Simple Feature collection of the GHCN stations within the specified template. If template is not provided, returns the entire GHCN inventory.
Usage
get_ghcn_inventory(template = NULL, elements = NULL, raw.dir)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
elements |
A character vector of elements to extract. Common elements include 'tmin', 'tmax', and 'prcp'. |
raw.dir |
A character string indicating where raw downloaded files should be put. The directory will be created if missing. |
Details
Stations with multiple elements will have multiple points. This allows for easy mapping of stations by element availability.
Value
A Simple Feature collection of the GHCN stations within the specified template.
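Examples
## Not run:
# Get all GHCN stations recording precipitation near Mesa Verde
# National Park.
ghcn_stations <- get_ghcn_inventory(
template = FedData::meve,
elements = "prcp",
raw.dir = file.path(tempdir(), "FedData", "raw", "ghcn")
)
## End(Not run)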
Download the latest version of the ITRDB, and extract given parameters.
Description
get_itrdb returns a named list of length 3:
'metadata': A data frame or Simple Feature collection (if makeSpatial == TRUE) of the locations and names of extracted ITRDB chronologies,
'widths': A matrix of tree-ring widths/densities given user selection, and
'depths': A matrix of tree-ring sample depths.
Usage
get_itrdb(
template = NULL,
label = NULL,
recon.years = NULL,
calib.years = NULL,
species = NULL,
measurement.type = NULL,
chronology.type = NULL,
raw.dir = paste0(tempdir(), "/FedData/raw/itrdb"),
extraction.dir = ifelse(!is.null(label), paste0(tempdir(),
"/FedData/extractions/itrdb/", label, "/"), paste0(tempdir(),
"/FedData/extractions/itrdb")),
force.redo = FALSE
)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
label |
A character string naming the study area. |
recon.years |
A numeric vector of years over which reconstructions are needed; if missing, the union of all years in the available chronologies is given. |
calib.years |
A numeric vector of all required years—chronologies without these years will be discarded; if missing, all available chronologies are given. |
species |
A character vector of 4-letter tree species identifiers; if missing, all available chronologies are given. |
measurement.type |
A character vector of measurement type identifiers; if missing, all available chronologies are given. |
chronology.type |
A character vector of chronology type identifiers; if missing, all available chronologies are given. |
raw.dir |
A character string indicating where raw downloaded files should be put. The directory will be created if missing. |
extraction.dir |
A character string indicating where the extracted and cropped ITRDB dataset should be put. The directory will be created if missing. |
force.redo |
If an extraction already exists, should a new one be created? Defaults to FALSE. |
Value
A named list containing the 'metadata', 'widths', and 'depths' data.
Examples
## Not run:
# Get the ITRDB records
ITRDB <- get_itrdb(
template = FedData::meve,
label = "meve"
)
# Plot the VEP polygon
plot(meve)
# Map the locations of the tree ring chronologies
plot(ITRDB$metadata$geometry, pch = 1, add = TRUE)
legend("bottomleft", pch = 1, legend = "ITRDB chronologies")
## End(Not run)
Download and crop the NASS Cropland Data Layer.
Description
get_nass_cdl
returns a SpatRaster
of NASS Cropland Data Layer cropped to a given
template study area.
Usage
get_nass_cdl(
template,
label,
year = 2019,
extraction.dir = paste0(tempdir(), "/FedData/"),
raster.options = c("COMPRESS=DEFLATE", "ZLEVEL=9", "INTERLEAVE=BAND"),
force.redo = FALSE,
progress = TRUE
)
get_nass(template, label, ...)
get_cdl(template, label, ...)
cdl_colors()
Arguments
template |
An sf or terra object to serve as a template for cropping. |
label |
A character string naming the study area. |
year |
An integer representing the year of the desired NASS Cropland Data Layer product. Acceptable values are 2007 through the most recent year available. |
extraction.dir |
A character string indicating where the extracted and cropped NASS data should be put. The directory will be created if missing. |
raster.options |
a vector of options for terra::writeRaster. |
force.redo |
If an extraction for this template and label already exists, should a new one be created? |
progress |
Draw a progress bar when downloading? |
... |
Other parameters passed on to get_nass_cdl. |
Value
A SpatRaster cropped to the bounding box of the template.
Examples
## Not run:
# Extract data for the Mesa Verde National Park:
# Get the NASS CDL (USA ONLY)
# Returns a raster
NASS <-
get_nass_cdl(
template = FedData::meve,
label = "meve",
year = 2011
)
# Plot with terra::plot
terra::plot(NASS)
## End(Not run)
Download and crop the 1 (~30 meter) or 1/3 (~10 meter) arc-second National Elevation Dataset.
Description
get_ned returns a SpatRaster of elevation data cropped to a given template study area.
Usage
get_ned(
template,
label,
res = "1",
extraction.dir = file.path(tempdir(), "FedData", "extractions", "ned", label),
raster.options = c("COMPRESS=DEFLATE", "ZLEVEL=9"),
force.redo = FALSE
)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
label |
A character string naming the study area. |
res |
A character string representing the desired resolution of the NED. '1' indicates the 1 arc-second NED (the default), while '13' indicates the 1/3 arc-second dataset. |
extraction.dir |
A character string indicating where the extracted and cropped DEM should be put. The directory will be created if missing. |
raster.options |
a vector of GDAL options passed to terra::writeRaster. |
force.redo |
If an extraction for this template and label already exists, should a new one be created? |
Value
A SpatRaster DEM cropped to the extent of the template.
Examples
## Not run:
# Get the NED (USA ONLY)
# Returns a `SpatRaster`
NED <-
get_ned(
template = FedData::meve,
label = "meve"
)
# Plot with terra::plot
terra::plot(NED)
## End(Not run)
Load and crop tile from the 1 (~30 meter) or 1/3 (~10 meter) arc-second National Elevation Dataset.
Description
get_ned_tile returns a SpatRaster cropped to the extent of the specified template. If template is not provided, returns the entire NED tile.
Usage
get_ned_tile(template = NULL, res = "1", tileNorthing, tileWesting)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
res |
A character string representing the desired resolution of the NED. '1' indicates the 1 arc-second NED (the default), while '13' indicates the 1/3 arc-second dataset. |
tileNorthing |
An integer representing the northing (latitude, in degrees north of the equator) of the northwest corner of the tile to be downloaded. |
tileWesting |
An integer representing the westing (longitude, in degrees west of the prime meridian) of the northwest corner of the tile to be downloaded. |
Value
A SpatRaster cropped to the extent of the template.
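Examples
## Not run:
# A minimal sketch: load the 1 arc-second tile whose northwest corner
# is at 38 degrees north, 109 degrees west (covering Mesa Verde).
ned_tile <- get_ned_tile(
template = FedData::meve,
res = "1",
tileNorthing = 38,
tileWesting = 109
)
## End(Not run)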
Download and crop the National Hydrography Dataset.
Description
get_nhd returns a list of Simple Feature objects extracted from the National Hydrography Dataset.
Usage
get_nhd(
template,
label,
nhdplus = FALSE,
extraction.dir = file.path(tempdir(), "FedData", "extractions", "nhd", label),
force.redo = FALSE
)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
label |
A character string naming the study area. |
nhdplus |
Extract data from the USGS NHDPlus High Resolution service (experimental). |
extraction.dir |
A character string indicating where the extracted and cropped NHD data should be put. |
force.redo |
If an extraction for this template and label already exists, should a new one be created? |
Value
A list of sf collections extracted from the National Hydrography Dataset.
Examples
## Not run:
# Get the NHD (USA ONLY)
NHD <- get_nhd(
template = FedData::meve,
label = "meve"
)
NHD
NHD %>%
plot_nhd(template = FedData::meve)
## End(Not run)
Download and crop the National Land Cover Database.
Description
get_nlcd returns a SpatRaster of NLCD data cropped to a given template study area. nlcd_colors and pal_nlcd return the NLCD legend and color palette, as available through the MRLC website.
Usage
get_nlcd(
template,
label,
year = 2021,
dataset = "landcover",
landmass = "L48",
extraction.dir = file.path(tempdir(), "FedData", "extractions", "nlcd", label),
raster.options = c("COMPRESS=DEFLATE", "ZLEVEL=9"),
force.redo = FALSE
)
nlcd_colors()
pal_nlcd()
Arguments
template |
An sf or terra object to serve as a template for cropping. |
label |
A character string naming the study area. |
year |
An integer representing the year of the desired NLCD product. Acceptable values are 2021 (the default), 2019, 2016, 2011, 2008, 2006, 2004, and 2001; not every year is available for every landmass or dataset. |
dataset |
A character string representing the type of NLCD product. Acceptable values are 'landcover' (default), 'impervious', and 'canopy'. |
landmass |
A character string representing the landmass to be extracted. Acceptable values are 'L48' (lower 48 US states, the default), 'AK' (Alaska, 2001, 2011, and 2016 only), 'HI' (Hawaii, 2001 only), and 'PR' (Puerto Rico, 2001 only). |
extraction.dir |
A character string indicating where the extracted and cropped NLCD data should be put. The directory will be created if missing. |
raster.options |
a vector of GDAL options passed to terra::writeRaster. |
force.redo |
If an extraction for this template and label already exists, should a new one be created? |
Value
A SpatRaster cropped to the bounding box of the template.
Examples
## Not run:
# Extract data for the Mesa Verde National Park:
# Get the NLCD (USA ONLY)
# Returns a raster
NLCD <-
get_nlcd(
template = FedData::meve,
label = "meve",
year = 2016
)
# Plot with terra::plot
terra::plot(NLCD)
## End(Not run)
Download and crop the Annual National Land Cover Database.
Description
get_nlcd_annual returns a SpatRaster of NLCD data cropped to a given template study area. The Annual NLCD is currently only available for the conterminous United States. More information about the Annual NLCD product is available on the Annual NLCD web page.
Usage
get_nlcd_annual(
template,
label,
year = 2023,
product = "LndCov",
region = "CU",
collection = 1,
version = 0,
extraction.dir = file.path(tempdir(), "FedData", "extractions", "nlcd_annual", label),
raster.options = c("COMPRESS=DEFLATE", "ZLEVEL=9"),
force.redo = FALSE
)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
label |
A character string naming the study area. |
year |
An integer vector representing the year(s) of the desired NLCD product. Acceptable values are currently 1985 through 2023 (defaults to 2023). |
product |
A character vector representing the type(s) of NLCD product. Options include 'LndCov' (Land Cover, the default), 'LndChg' (Land Cover Change), 'LndCnf' (Land Cover Confidence), 'FctImp' (Fractional Impervious Surface), 'ImpDsc' (Impervious Descriptor), and 'SpcChg' (Spectral Change Day of Year). |
region |
A character string representing the region to be extracted. Acceptable values are 'CU' (Conterminous US, the default), 'AK' (Alaska), and 'HI' (Hawaii). Currently, only 'CU' is available. |
collection |
An integer representing the collection number. Currently, only '1' is available. |
version |
An integer representing the version number. Currently, only '0' is available. |
extraction.dir |
A character string indicating where the extracted and cropped NLCD data should be put. The directory will be created if missing. |
raster.options |
a vector of GDAL options passed to terra::writeRaster. |
force.redo |
If an extraction for this template and label already exists, should a new one be created? |
Value
A SpatRaster cropped to the bounding box of the template.
Examples
## Not run:
# Extract data for the Mesa Verde National Park:
# Get the Annual NLCD (conterminous US only)
# Returns a raster
NLCD_ANNUAL <-
get_nlcd_annual(
template = FedData::meve,
label = "meve",
year = 2020,
product =
c(
"LndCov",
"LndChg",
"LndCnf",
"FctImp",
"ImpDsc",
"SpcChg"
)
)
NLCD_ANNUAL
## End(Not run)
Download and crop the PAD-US Dataset.
Description
get_padus returns a list of sf objects extracted from the PAD-US Dataset. Data are retrieved directly from the PAD-US ArcGIS Web Services.
Usage
get_padus(
template,
label,
layer = c("Manager_Name"),
extraction.dir = file.path(tempdir(), "FedData", "extractions", "padus", label),
force.redo = FALSE
)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
label |
A character string naming the study area. |
layer |
A character vector containing one or more PAD-US layers. By default, the 'Manager_Name' layer is downloaded. |
extraction.dir |
A character string indicating where the extracted and cropped PAD-US data should be put. |
force.redo |
If an extraction for this template and label already exists, should a new one be created? |
Details
PAD-US is America’s official national inventory of U.S. terrestrial and marine protected areas that are dedicated to the preservation of biological diversity and to other natural, recreation and cultural uses, managed for these purposes through legal or other effective means. PAD-US also includes the best available aggregation of federal land and marine areas provided directly by managing agencies, coordinated through the Federal Geographic Data Committee Federal Lands Working Group.
Value
A list of sf::sf collections extracted from the PAD-US Dataset.
Examples
## Not run:
# Get the PAD-US (USA ONLY)
PADUS <- get_padus(
template = FedData::meve,
label = "meve"
)
PADUS
## End(Not run)
Download and crop data from the NRCS SSURGO soils database.
Description
This is an efficient method for spatially merging several different soil survey areas as well as merging their tabular data.
Usage
get_ssurgo(
template,
label,
raw.dir = paste0(tempdir(), "/FedData/raw/ssurgo"),
extraction.dir = paste0(tempdir(), "/FedData/"),
force.redo = FALSE
)
Arguments
template |
An sf or terra object to serve as a template for cropping, or a character vector of SSURGO study area names (e.g., 'CO670'). |
label |
A character string naming the study area. |
raw.dir |
A character string indicating where raw downloaded files should be put. The directory will be created if missing. Defaults to a temporary FedData directory (see Usage). |
extraction.dir |
A character string indicating where the extracted and cropped SSURGO shapefiles should be put. The directory will be created if missing. Defaults to a temporary FedData directory (see Usage). |
force.redo |
If an extraction for this template and label already exists, should a new one be created? Defaults to FALSE. |
Details
get_ssurgo returns a named list of length 2:
'spatial': A Simple Feature collection of soil mapunits in the template, and
'tabular': A named list of data.frames with the SSURGO tabular data.
Value
A named list containing the 'spatial' and 'tabular' data.
Examples
## Not run:
# Get the NRCS SSURGO data (USA ONLY)
SSURGO.MEVE <-
get_ssurgo(
template = FedData::meve,
label = "meve"
)
# Plot the VEP polygon
plot(meve)
# Plot the SSURGO mapunit polygons
plot(SSURGO.MEVE$spatial["MUKEY"],
lwd = 0.1,
add = TRUE
)
# Or, download by Soil Survey Area names
SSURGO.areas <-
get_ssurgo(
template = c("CO670", "CO075"),
label = "CO_TEST"
)
# Let's just look at spatial data for CO075
SSURGO.areas.CO075 <-
SSURGO.areas$spatial[SSURGO.areas$spatial$AREASYMBOL == "CO075", ]
# And get the NED data under them for pretty plotting
NED.CO075 <-
get_ned(
template = SSURGO.areas.CO075,
label = "SSURGO_CO075"
)
# Plot the SSURGO mapunit polygons, but only for CO075
terra::plot(NED.CO075)
plot(
SSURGO.areas.CO075$geom,
lwd = 0.1,
add = TRUE
)
## End(Not run)
Download and crop a shapefile of the SSURGO study areas.
Description
get_ssurgo_inventory returns a SpatialPolygonsDataFrame of the SSURGO study areas within the specified template. If template is not provided, returns the entire SSURGO inventory of study areas.
Usage
get_ssurgo_inventory(template = NULL, raw.dir)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
raw.dir |
A character string indicating where raw downloaded files should be put. The directory will be created if missing. |
Value
A SpatialPolygonsDataFrame of the SSURGO study areas within the specified template.
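Examples
## Not run:
# Get the SSURGO study areas overlapping Mesa Verde National Park.
ssa <- get_ssurgo_inventory(
template = FedData::meve,
raw.dir = file.path(tempdir(), "FedData", "raw", "ssurgo")
)
## End(Not run)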
Download and crop the spatial and tabular data for a SSURGO study area.
Description
get_ssurgo_study_area returns a named list of length 2:
'spatial': A Simple Feature collection of soil mapunits in the template, and
'tabular': A named list of data.frames with the SSURGO tabular data.
Usage
get_ssurgo_study_area(template = NULL, area, date, raw.dir)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
area |
A character string indicating the SSURGO study area to be downloaded. |
date |
A character string indicating the date of the most recent update to the SSURGO
area for these data. This information may be gleaned from the SSURGO Inventory ( |
raw.dir |
A character string indicating where raw downloaded files should be put. The directory will be created if missing. |
Value
A named list containing the 'spatial' and 'tabular' data.
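Examples
## Not run:
# A minimal sketch; the date string is a placeholder and must match
# the most recent update date listed in the SSURGO inventory.
CO670 <- get_ssurgo_study_area(
template = FedData::meve,
area = "CO670",
date = "09/12/2021",
raw.dir = file.path(tempdir(), "FedData", "raw", "ssurgo")
)
## End(Not run)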
Download and crop the Watershed Boundary Dataset.
Description
get_wbd returns a Simple Feature collection of the HUC 12 regions within the specified template.
Usage
get_wbd(
template,
label,
extraction.dir = file.path(tempdir(), "FedData", "extractions", "nhd", label),
force.redo = FALSE
)
Arguments
template |
An sf or terra object to serve as a template for cropping. |
label |
A character string naming the study area. |
extraction.dir |
A character string indicating where the extracted and cropped NHD data should be put. |
force.redo |
If an extraction for this template and label already exists, should a new one be created? |
Value
An sf collection of the HUC 12 regions within the specified template.
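Examples
## Not run:
# Get the HUC 12 watershed boundaries overlapping Mesa Verde
# National Park.
WBD <- get_wbd(
template = FedData::meve,
label = "meve"
)
## End(Not run)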
The boundary of Mesa Verde National Park
Description
A dataset containing the spatial polygon defining the boundary of Mesa Verde National Park in Colorado.
Usage
meve
Format
Simple feature collection with 1 feature and a geometry field.
A basic plotting function for NHD data.
Description
This is more of an example than anything else.
Usage
plot_nhd(x, template = NULL)
Arguments
x |
The result of get_nhd. |
template |
An sf or terra object defining the extent of the plot. |
Value
A ggplot2 panel of plots.
Examples
## Not run:
# Get the NHD (USA ONLY)
NHD <- get_nhd(
template = FedData::meve,
label = "meve"
)
NHD
NHD %>%
plot_nhd(template = FedData::meve)
## End(Not run)
Turn an extent object into a polygon
Description
Turn an extent object into a polygon
Usage
polygon_from_extent(x, proj4string = NULL)
Arguments
x |
An object from which a bounding box can be retrieved. |
proj4string |
A PROJ.4 formatted string defining the required projection. |
Value
A Simple Feature object.
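Examples
## Not run:
# A minimal sketch: build a Simple Feature polygon from a terra
# extent (xmin, xmax, ymin, ymax).
mv_poly <- polygon_from_extent(
terra::ext(-108.6, -108.3, 37.1, 37.4),
proj4string = "+proj=longlat +datum=WGS84"
)
## End(Not run)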
Read a Tucson-format chronology file.
Description
This function includes improvements to the read.crn function from the
dplR library. The principal changes are better parsing of metadata and support
for the Schweingruber-type Tucson format. Chronologies that cannot be read
are reported to the user. This function automatically recognizes Schweingruber-type files.
Usage
read_crn(file)
Arguments
file |
A character string path pointing to a Tucson-format (*.crn) chronology file. |
Details
This wraps two other functions: read_crn_metadata and read_crn_data.
Value
A list containing the metadata and chronology.
Read chronology data from a Tucson-format chronology file.
Description
This function includes improvements to the read.crn function from the
dplR library. The principal changes are better parsing of metadata and support
for the Schweingruber-type Tucson format. Chronologies that cannot be read
are reported to the user. The user (or read_crn) must tell the function whether
the file is a Schweingruber-type chronology.
Usage
read_crn_data(file, SCHWEINGRUBER)
Arguments
file |
A character string path pointing to a Tucson-format (*.crn) chronology file. |
SCHWEINGRUBER |
Is the file in the Schweingruber-type Tucson format? |
Value
A data.frame containing the data, or, if SCHWEINGRUBER == TRUE, a list containing four types of data.
Read metadata from a Tucson-format chronology file.
Description
This function includes improvements to the read.crn function from the
dplR library. The principal changes are better parsing of metadata and support
for the Schweingruber-type Tucson format. Chronologies that cannot be read
are reported to the user. The user (or read_crn) must tell the function whether
the file is a Schweingruber-type chronology.
Usage
read_crn_metadata(file, SCHWEINGRUBER)
Arguments
file |
A character string path pointing to a Tucson-format (*.crn) chronology file. |
SCHWEINGRUBER |
Is the file in the Schweingruber-type Tucson format? |
Details
Location information is converted to decimal degrees.
Value
A data.frame containing the metadata.
Replace NULLs
Description
Replace all the empty values in a list
Usage
replace_null(x)
Arguments
x |
A list |
Value
A list with NULLs replaced by NA
Examples
list(a = NULL, b = 1, c = list(foo = NULL, bar = NULL)) %>% replace_null()
Get a logical vector of which elements in a vector are sequentially duplicated.
Description
Get a logical vector of which elements in a vector are sequentially duplicated.
Usage
sequential_duplicated(x, rows = FALSE)
Arguments
x |
A vector of any type or, if rows = TRUE, a matrix. |
rows |
Is x a matrix? |
Value
A logical vector of the same length as x.
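Examples
# A minimal sketch: elements equal to their immediate predecessor
# are flagged as sequential duplicates (assumed behavior).
sequential_duplicated(c(1, 1, 2, 2, 2, 1))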
Submit a Soil Data Access (SDA) Query
Description
soils_query submits an SQL query to retrieve data from the Soil Data Mart. Please see https://sdmdataaccess.sc.egov.usda.gov/Query.aspx for guidelines.
Usage
soils_query(q)
Arguments
q |
A character string representing a SQL query to the SDA service |
Value
A tibble returned from the SDA service
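Examples
## Not run:
# A minimal sketch: look up the survey-area catalog entry for CO670.
soils_query(
q = "SELECT areasymbol, saverest FROM sacatalog WHERE areasymbol = 'CO670'"
)
## End(Not run)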
Splits a bbox into a list of bboxes less than a certain size
Description
Splits a bbox into a list of bboxes less than a certain size
Usage
split_bbox(bbox, x, y = x)
Arguments
bbox |
The bounding box to be split (e.g., the result of sf::st_bbox). |
x |
The maximum x size of the resulting bounding boxes |
y |
The maximum y size of the resulting bounding boxes; defaults to x |
Value
A list of bbox objects
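Examples
## Not run:
# A minimal sketch: split a study-area bbox into pieces no larger
# than 0.1 degrees on a side.
tiles <- split_bbox(
bbox = sf::st_bbox(FedData::meve),
x = 0.1
)
## End(Not run)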
Convert a list of station data to a single data frame.
Description
station_to_data_frame returns a data.frame of the GHCN station data list.
Usage
station_to_data_frame(station.data)
Arguments
station.data |
A named list containing station data |
Details
This function unwraps the station data and merges all data into a single data frame, with the first column being of class Date.
Value
A data.frame containing the unwrapped station data.
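Examples
## Not run:
# A minimal sketch: flatten the data for one station returned by
# get_ghcn_daily into a single data frame.
GHCN <- get_ghcn_daily(
template = FedData::meve,
label = "meve",
elements = c("prcp")
)
station_df <- station_to_data_frame(GHCN$tabular[[1]])
## End(Not run)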
Get the rightmost 'n' characters of a character string.
Description
Get the rightmost 'n' characters of a character string.
Usage
substr_right(x, n)
Arguments
x |
A character string. |
n |
The number of characters to retrieve. |
Value
A character string.
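Examples
# The rightmost four characters of the string:
substr_right("FedData", 4)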
Unwraps a matrix and only keeps the first n elements.
Description
A function that unwraps a matrix and only keeps the first n elements. n can be either a constant (in which case it will be repeated) or a vector.
Usage
unwrap_rows(mat, n)
Arguments
mat |
A matrix |
n |
A numeric vector |
Value
A vector of the retained matrix elements.
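Examples
## Not run:
# A minimal sketch (assumed behavior): unwrap a matrix row by row,
# keeping the first 2 elements of row 1 and the first 3 of row 2.
mat <- matrix(1:8, nrow = 2, byrow = TRUE)
unwrap_rows(mat, n = c(2, 3))
## End(Not run)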
Strip query parameters from a URL
Description
Strip query parameters from a URL
Usage
url_base(x)
Arguments
x |
The URL to be modified |
Value
The URL without parameters
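Examples
# A minimal sketch: drop the query string, keeping the base URL.
url_base("https://example.com/arcgis/rest/services?f=json")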