CRAN Task View: Web Technologies and Services
| Maintainer: | Mauricio Vargas Sepulveda, Will Beasley |
|---|---|
| Contact: | m.sepulveda at mail.utoronto.ca |
| Version: | 2024-10-27 |
| URL: | https://CRAN.R-project.org/view=WebTechnologies |
| Source: | https://github.com/cran-task-views/WebTechnologies/ |
| Contributions: | Suggestions and improvements for this task view are very welcome and can be made through issues or pull requests on GitHub or via e-mail to the maintainer address. For further details see the Contributing guide. |
| Citation: | Mauricio Vargas Sepulveda, Will Beasley (2024). CRAN Task View: Web Technologies and Services. Version 2024-10-27. URL https://CRAN.R-project.org/view=WebTechnologies. |
| Installation: | The packages from this task view can be installed automatically using the ctv package. For example, ctv::install.views("WebTechnologies", coreOnly = TRUE) installs all the core packages, while ctv::update.views("WebTechnologies") installs all packages that are not yet installed and up-to-date. See the CRAN Task View Initiative for more details. |
0. Introduction
Tools for Working with the Web
This task view recommends packages and strategies for efficiently interacting with resources over the internet with R. This task view focuses on:
- Direct data download and ingestion,
- Online services,
- Frameworks for building web-based R applications,
- Low-level operations, and
- Resources
If you have suggestions for improving or growing this task view, please submit an issue or a pull request in the GitHub repository linked above. If you can’t contribute on GitHub, please e-mail the task view maintainer. If you have an issue with a package discussed below, please contact the package’s maintainer.
Thanks to all contributors to this task view, especially to Scott Chamberlain, Thomas Leeper, Patrick Mair, Karthik Ram, and Christopher Gandrud who maintained this task view up to 2021.
Core Tools For HTTP Requests
The bulk of R’s capabilities are supplied by CRAN packages that are layered on top of libcurl. A handful of packages provide the foundation for most modern approaches.
- httr2 and its predecessor httr are user-facing clients for HTTP requests. They leverage the curl package for most operations. If you are developing a package that calls a web service, we recommend reading their vignettes. (A minimal httr2 sketch follows this list.)
- crul is another package that leverages curl. It is an R6-based client that supports asynchronous HTTP requests, a pagination helper, HTTP mocking via webmockr, and request caching for unit tests via vcr. crul is intended to be called by other packages rather than directly by R users. Unlike httr2, crul’s current version does not support OAuth. Additional options may be passed to curl when instantiating crul’s R6 classes.
- curl is the lower-level package that provides a close interface between R and the libcurl C library. It is not intended to be called directly by typical R users. curl may be useful for operations on web-based XML or with FTP (as crul and httr2 are focused primarily on HTTP).
- utils and base are the base R packages that provide `download.file()`, `url()`, and related functions. These functions also use libcurl.
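As referenced above, a minimal httr2 sketch; the URL, query parameter, and contact address are hypothetical placeholders:

```r
library(httr2)

# Build, perform, and parse a GET request against a hypothetical API.
resp <- request("https://api.example.com/v1/items") |>
  req_user_agent("my-script (you@example.com)") |>  # identify yourself politely
  req_url_query(page = 1) |>
  req_perform()

resp_status(resp)              # e.g., 200
items <- resp_body_json(resp)  # parsed JSON body as an R list
```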
Before you Start Using Web Scraping Tools
You may already have code that performs web scraping, and it may be very efficient in terms of time and resource usage, but first we need to talk about whether it is legal and ethical for you to run it.
You can use the ‘polite’ package, which builds upon the principles of seeking permission, taking slowly, and never asking twice. The package builds on awesome toolkits for defining and managing HTTP sessions (‘httr’ and ‘rvest’), declaring the user agent string and investigating site policies (‘robots.txt’), and utilizing rate-limiting and response caching (‘ratelimitr’ and ‘memoise’).
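A minimal ‘polite’ sketch, assuming a hypothetical target URL; `bow()` consults robots.txt and negotiates a session before `scrape()` fetches the page:

```r
library(polite)

# Introduce yourself to the host and check its robots.txt rules.
session <- bow("https://example.com/catalogue",
               user_agent = "me@example.com; research project")

# Fetch the page politely (rate-limited and cached); the result can be
# passed to rvest functions such as html_elements().
page <- scrape(session)
```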
The problem is not technical, but ethical and also legal. You can technically log into an art auction site and scrape the prices of all the paintings, but if you need an account and use RSelenium to extract the information by automating clicks in the browser, you are subject to the site’s Terms of Service (ToS).
Another problem is that some websites require specific connections. You can connect to a site from a university or government building and access content for free, but if you connect from home, you may find that you need a paid subscription to access the same content. If you scrape a site from a university, you might be breaking laws if you are not careful about the goal and scope of the scraping.
1. Direct data download and ingestion
In recent years, many functions have been updated to accommodate web pages that are protected with TLS/SSL. Consequently, you can usually download a file if its URL starts with “http” or “https”.
If the data file is not accessible via a simple url, you probably want to skip to the Online services section. It describes how to work with specific web services such as AWS, Google Documents, Twitter, REDCap, PubMed, and Wikipedia.
If the information is served by a database engine, please review the cloud services in the Online services section below, as well as the Databases with R CRAN Task View.
Ingest a remote file directly
Many base and CRAN packages provide functions that accept a URL and return a `data.frame` or `list` (a short sketch follows this list).
- For tabular/rectangular plain-text structures:
  - utils’s `read.csv()`, `read.table()`, and friends return a `base::data.frame`.
  - readr’s `read_csv()`, `read_delim()`, and friends return a `tibble::tibble`, which derives from `base::data.frame`.
  - data.table’s `fread()` returns a `data.table::data.table`, which derives from `base::data.frame`.
  - arrow’s `read_csv_arrow()` returns a `tibble::tibble` or other Arrow structures.
- For hierarchical/nested plain-text structures:
- For structures in the Spark ecosystem:
- arrow: interacts with a variety of file types used with big data, including Parquet, Feather, and Arrow IPC streams.
- For other file structures:
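As referenced above, a short sketch reading a remote CSV directly; the URL is a hypothetical placeholder:

```r
# Each of these accepts a URL directly.
url <- "https://example.com/data.csv"

df  <- utils::read.csv(url)    # base::data.frame
tbl <- readr::read_csv(url)    # tibble::tibble
dt  <- data.table::fread(url)  # data.table::data.table
```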
Download a remote file, then ingest it
If you need to process a different type of file, you can accomplish this in two steps: first, download the file from a server to your local computer; second, pass the path of the new local file to a function in a package like haven or foreign (a sketch follows the list below).
Many base and CRAN packages provide functions that download files:
- utils: `download.file()`.
- curl: `curl_download()`, `curl_fetch_multi()`, and friends.
- httr2: `req_perform(path = <your_file_path>)`, or alternatively `req_perform()` piped to `resp_body_string()`.
- httr: `GET()`.
- RCurl: `getURL()`.
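A two-step sketch, assuming a hypothetical remote Stata file that haven then reads from the local copy:

```r
# Step 1: download the remote file (hypothetical URL).
url  <- "https://example.com/survey.dta"
path <- tempfile(fileext = ".dta")
utils::download.file(url, destfile = path, mode = "wb")

# Step 2: ingest the local file.
dat <- haven::read_dta(path)
```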
Parsing Structured Web Data
The vast majority of web-based data is structured as plain text, HTML, XML, or JSON. Web service APIs increasingly rely on JSON, but XML is still prevalent in many applications. There are several packages for working specifically with these formats. These functions can be used to interact directly with insecure web pages or can be used to parse locally stored or in-memory web files. Colloquially, these activities are called web scraping.
- XML: There are two foundational packages for working with XML: XML and xml2. Both support general XML (and HTML) parsing, including XPath queries. xml2 is less fully featured, but more user friendly with respect to memory management, classes (e.g., XML node vs. node set vs. document), and namespaces. Of the two, only XML supports de novo creation of XML nodes and documents.
Other XML tools include:
- XML2R is a collection of convenient functions for coercing XML into data frames.
- selectr parses CSS3 selectors and translates them to XPath 1.0 expressions. XML is often used for parsing XML and HTML via XPath; selectr lets you use CSS selectors instead.
- XMLSchema provides facilities in R for reading XML schema documents and processing them to create definitions for R classes and functions for converting XML nodes to instances of those classes. It provides the framework for meta-computing with XML schema in R.
- xslt is an extension for xml2 to transform XML documents by applying an xslt style-sheet. This may be useful for web scraping, as well as transforming XML markup into another human- or machine-readable format (e.g., HTML, JSON, plain text, etc.).
- HTML: All of the tools that work with XML also work for HTML, though HTML tends to be more prone to be malformed. So `xml2::read_html()` is a good first function to use for importing HTML. Other tools are designed specifically to work with HTML.
  - For capturing static content of web pages, postlightmercury is a client for the web service ‘Mercury’ that turns web pages into structured and clean text.
  - rvest is another higher-level alternative which expresses common web scraping tasks with pipes (like base R’s `|>` and magrittr’s `%>%`); a small sketch follows this list.
  - boilerpipeR provides generic extraction of main text content from HTML files; removal of ads, sidebars, and headers using the boilerpipe Java library.
  - PhantomJS (which was archived in 2018): webshot uses PhantomJS to provide screenshots of web pages without a browser. It can be useful for testing websites (such as Shiny applications). rdom (a GitHub-only package, cpsievert/rdom) uses PhantomJS to access a webpage’s Document Object Model (DOM).
  - htmltools provides functions to create HTML elements.
- RHTMLForms reads HTML documents and obtains a description of each of the forms it contains, along with the different elements and hidden fields. htm2txt uses regex to convert HTML documents to plain text by removing all HTML tags. Rcrawler does crawling and scraping of web pages.
- HTML Utilities: These tools don’t extract content, but they can help you develop and debug.
  - W3CMarkupValidator provides an R interface to W3C Markup Validation Services for validating HTML documents.
  - The selectorgadget browser extension can be used to identify page elements.
- JSON: There are several packages for reading and writing JSON: rjson, RJSONIO, and jsonlite. We recommend using jsonlite; see the paper describing it by Jeroen Ooms (https://arxiv.org/abs/1403.2805). A small jsonlite sketch follows this list. jqr provides bindings for the fast JSON library ‘jq’. jsonvalidate validates JSON against a schema using the “is-my-json-valid” JavaScript library; ajv does the same using the ‘ajv’ JavaScript library. ndjson supports the “ndjson” format.
- RSS/Atom: feedeR can be used to parse RSS or Atom feeds. tidyRSS parses RSS, Atom XML/JSON and geoRSS into a tidy data.frame.
- swagger can be used to automatically generate functions for working with a web service API that provides documentation in Swagger.io format.
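As referenced above, a small rvest sketch; the URL and CSS selectors are hypothetical placeholders, and you should check the site’s robots.txt and terms before scraping:

```r
library(rvest)

# Parse a page and extract elements with CSS selectors
# (hypothetical URL and selectors).
page <- read_html("https://example.com/catalogue")

titles <- page |>
  html_elements(".product .title") |>  # select nodes by CSS selector
  html_text2()                         # extract cleaned text

links <- page |>
  html_elements("a.product-link") |>
  html_attr("href")                    # extract an attribute
```

And a minimal jsonlite sketch; `fromJSON()` also accepts URLs and file paths, and by default simplifies JSON arrays of objects into data frames:

```r
library(jsonlite)

txt <- '[{"name": "Ada", "score": 90}, {"name": "Bo", "score": 95}]'
df  <- fromJSON(txt)               # a 2-row data.frame: name, score
json <- toJSON(df, pretty = TRUE)  # serialize back to JSON text
```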
2. Online services
Cloud Computing and Storage
- Amazon Web Services (AWS):
- paws is an interface to nearly all AWS APIs, including compute, storage, databases, and machine learning. It also requires no external system dependencies. (A minimal sketch follows this list.)
- aws.signature provides functionality for generating AWS API request signatures.
- Elastic Compute Cloud (EC2) is a cloud computing service. segue manages EC2 instances and S3 storage, and includes a parallel version of `lapply()` for the Elastic Map Reduce (EMR) engine called `emrlapply()`. It uses Hadoop Streaming on Amazon’s EMR in order to get simple parallel computation.
- Microsoft Azure: Azure and Microsoft 365 are Microsoft’s cloud computing services.
- The Azure platform provides PaaS, SaaS and IaaS and supports many different tools and frameworks, including both Microsoft-specific and third-party systems; while Microsoft 365 is a unified framework for accessing cloud data from Microsoft’s Office services, Windows and Dynamics. The AzureR package family aims to provide a suite of lightweight, powerful tools for working with Azure in R. The packages listed below are part of the family, and are also mirrored at the cloudyr project.
- Azure Active Directory (AAD) is a centralized directory and identity service. AzureAuth is an R client for AAD; use this to obtain OAuth tokens for authenticating with other Azure services, including Resource Manager and storage (see next).
- Microsoft Graph is the API framework for the Microsoft 365 platform, including Azure Active Directory and Office. AzureGraph is a low-level extensible R6-based interface to Graph. Microsoft365R is an interface to the Office part of Microsoft 365, including OneDrive and SharePoint Online.
- Azure Resource Manager (ARM) is a service for deploying other Azure services. AzureRMR is an R interface to ARM, and allows managing subscriptions, resource groups, resources and templates. It exposes a general R6 class framework that can be extended to provide extra functionality for specific services (see next).
- Azure Storage Accounts are a general-purpose data storage facility. Different types of storage are available: file, blob, table, Data Lake, and more. AzureStor provides an R interface to storage. Features include clients for file, blob and Data Lake Gen2 storage, parallelized file transfers, and an interface to Microsoft’s cross-platform AzCopy command line utility. Also supplied is an ARM interface, to allow creation and managing of storage accounts. AzureTableStor and AzureQstor extend AzureStor to provide interfaces to table storage and queue storage, respectively.
- AzureVM creates and manages virtual machines in Azure. It includes templates for a wide variety of common VM specifications and operating systems, including Windows, Ubuntu, Debian and RHEL.
- AzureContainers provides a unified facility for working with containers in Azure. Specifically, it includes R interfaces to Azure Container Instances (ACI), Azure Docker Registry (ACR) and Azure Kubernetes Service (AKS). Create Docker images and push them to an ACR repository; spin up ACI containers; deploy Kubernetes services in AKS.
- Azure Data Explorer, also known as Kusto, is a fast, scalable data exploration and analytics service. AzureKusto is an R interface to ADE/Kusto. It includes a dplyr client interface similar to that provided by dbplyr for SQL databases, a DBI client interface, and an ARM interface for deploying and managing Kusto clusters and databases.
- Azure Cosmos DB is a multi-model NoSQL database service, previously known as Document DB. AzureCosmosR is an interface to the core/SQL API for Cosmos DB. It also includes simple bridges to the table storage and MongoDB APIs.
- Azure Computer Vision and Azure Custom Vision are AI services for image recognition and analysis. Computer Vision is a pre-trained service for handling commonly-encountered tasks, while Custom Vision allows you to train your own image recognition model on a custom dataset. AzureVision provides an interface to both these services.
- Application Insights provides application performance monitoring and usage tracking of live web applications. AzureAppInsights allows developers of Shiny apps to include the Application Insights JS SDK in their apps for tracking performance. Not part of the cloudyr project or AzureR package family.
- Google Cloud and Google Drive:
- googledrive interfaces with Google Drive.
- googleComputeEngineR interacts with the Google Compute Engine API, and lets you create, start and stop instances in the Google Cloud.
- googleCloudStorageR interfaces with Google Cloud Storage.
- bigrquery: An interface to Google’s BigQuery.
- rrefine provides a client for the ‘Open Refine’ (formerly ‘Google Refine’) data cleaning service.
- gargle: An interface to Google APIs.
- Look in other sections of the Web Technologies task view for packages interfacing other Google products.
- Dropbox: repmis’s `source_Dropbox()` function downloads and caches plain-text data from non-public folders.
- Other Cloud Storage: boxr is a lightweight, high-level interface for the box.com API.
- Docker: analogsea is a general purpose client for the Digital Ocean v2 API. In addition, it includes functions to install various R tools including base R, RStudio server, and more. It also offers an evolving interface for interacting with Docker on your remote droplets.
- crunch provides an interface to the crunch.io storage and analytics platform. crunchy facilitates making Shiny apps on Crunch.
- The cloudyr project aims to provide interfaces to popular Amazon, Azure and Google cloud services without the need for external system dependencies. Amazon Web Services is a popular, proprietary cloud service offering a suite of computing, storage, and infrastructure tools.
- pins can be used to publish data, models, and other R objects across a range of backends, including AWS, Azure, Google Cloud Storage, and Posit Connect.
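As referenced in the AWS item above, a minimal paws sketch; it assumes AWS credentials are already configured (e.g., via environment variables), and the bucket and key names are hypothetical:

```r
# Create an S3 service client and fetch one object.
svc <- paws::s3()
svc$list_buckets()  # enumerate buckets visible to these credentials

obj <- svc$get_object(Bucket = "my-bucket", Key = "data/file.csv")
writeBin(obj$Body, "file.csv")  # obj$Body is a raw vector
```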
Software Development
- R-hub is a collection of free services to help R package development across all architectures. rhub interfaces with R-Hub to allow you to check a package on the platform.
- GitHub: gistr works with GitHub gists (gist.github.com) from R, allowing you to create new gists, update gists with new files, rename files, delete files, get and delete gists, star and un-star gists, fork gists, open a gist in your default browser, get embed code for a gist, list gist commits, and get rate limit information when authenticated. git2r provides bindings to the git version control system and gh is a client for the GitHub API.
- GitLab: gitlabr is a GitLab-specific client.
Documents and Images
- Data archiving: dataverse provides access to Dataverse, the open source research data repository software. rfigshare connects with Figshare.com. dataone provides a client for ‘DataONE’ repositories.
- Google Sheets: googlesheets4 (which replaces `googlesheets`) can access private or public ‘Google Sheets’ by title, key, or URL; extract or edit data; and create, delete, rename, copy, upload, or download spreadsheets and worksheets. gsheet can download Google Sheets using just the sharing link; spreadsheets can be downloaded as a data frame, or as plain text to parse manually. (A small googlesheets4 sketch follows this list.)
- imguR shares plots using the image hosting service Imgur.com. knitr also has a function `imgur_upload()` to upload images from literate programming documents.
- Teams, SharePoint and OneDrive: Microsoft365R provides an interface to these services, which form part of the Microsoft 365 (formerly known as Office 365) suite.
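A small googlesheets4 sketch, assuming a hypothetical, link-shared (publicly readable) sheet:

```r
library(googlesheets4)

gs4_deauth()  # read public sheets without a Google sign-in

# The spreadsheet id below is a hypothetical placeholder.
df <- read_sheet("https://docs.google.com/spreadsheets/d/<sheet-id>")
```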
Data Processing and Visualization
- Document Processing: pdftables uses the PDFTables.com webservice to extract tables from PDFs.
- Visualization: Plotly is a service that allows you to create visualizations on the web using R (and Python); it is accessible via plotly. googleVis provides an interface between R and the Google chart tools.
- Other : rrefine can import to and export from the ‘OpenRefine’ data cleaning service.
Machine Learning and Translation
This list describes online services. For a more complete treatment of the topic, please see the MachineLearning CRAN Task View.
- Machine Learning as a Service: Several packages provide access to cloud-based machine learning services. OpenML is the official client for the OpenML API. clarifai is a Clarifai.com client that enables automated image description. rLTP accesses the ltp-cloud service. languagelayeR is a client for Languagelayer, a language detection API. yhatr lets you deploy, maintain, and invoke models via the Yhat REST API. datarobot works with Data Robot’s predictive modeling platform. mscsweblm4r interfaces with the Microsoft Cognitive Services Web Language Model API and mscstexta4r uses the Microsoft Cognitive Services Text Analytics REST API. rosetteApi links to the ‘Rosette’ text analysis API. googleLanguageR provides interfaces to Google’s Cloud Translation API, Natural Language API, Cloud Speech API, and the Cloud Text-to-Speech API. AzureVision provides interfaces to the Azure Computer Vision and Custom Vision image recognition services.
- Machine Translation: RYandexTranslate connects to Yandex Translate.
Spatial Analysis
This list describes online services. For a more complete treatment of the topic, please see the Analysis of Spatial Data CRAN Task View.
- Geolocation/Geocoding: Services that translate between addresses and longitude/latitude coordinates. rgeolocate (archived) offers several online and offline tools. rydn is an interface to the Yahoo Developers network geolocation APIs, and ipapi can be used to geolocate IPv4/6 addresses and/or domain names using the http://ip-api.com/ API. opencage provides access to the ‘OpenCage’ geocoding service. nominatimlite and nominatim connect to the OpenStreetMap Nominatim API for reverse geocoding. PostcodesioR provides post code lookup and geocoding for the United Kingdom. geosapi is an R client for the ‘GeoServer’ REST API, an open source implementation used widely for serving spatial data. geonapi provides an interface to the ‘GeoNetwork’ legacy API, an open source catalogue for managing geographic metadata. ows4R is a new R client for the ‘OGC’ standard web services, such as the Web Feature Service (WFS) for data and the Catalogue Service for the Web (CSW) for metadata.
- Mapping: Services that help create visual maps.
- OpenStreetMap: osmplotr extracts customizable map images.
- Google Maps: RgoogleMaps serves two purposes: it provides a comfortable R interface to query the Google server for static maps, and uses the map as a background image to overlay plots within R. mapsapi is an sf-compatible interface to Google Maps API.
- Routing: Services that calculate and optimize distances and routes.
- OpenStreetMap: osrm assists with the computation of routes, trips, isochrones and travel distances matrices.
Social Media Clients
The following packages provide an interface to their associated service, unless noted otherwise.
- Twitter: rtweet provides an interface to the Twitter API. twitterreport focuses on report generation based on Twitter data. streamR allows users to access Twitter’s filter, sample, and user streams, and to parse the output into data frames. OAuth authentication is supported. graphTweets produces a network graph from a data.frame of tweets. twitter_ideology implements a political ideology scaling measure for specified Twitter users.
- Facebook: Rfacebook
- Instagram: instaR
- LinkedIn: Rlinkedin
- Stack Exchange: stackr
- Pinterest: rpinterest
- VK: vkR interfaces with VK, a social networking site based in Russia.
- Meetup: meetupr
- Brandwatch: brandwatchR
- Hacker News: hackeRnews
- Mastodon: rtoot
- Slack: slackr
- Discourse: discgolf provides an interface to an instance of Discourse, not to the Discourse site itself.
Survey, Questionnaire, and Data Capture Tools
- REDCap:
- REDCapR and redcapAPI export and import data from REDCap, a web application for building and managing online surveys and research databases. (A small REDCapR sketch follows this list.)
- Another layer of packages provides additional extensions that streamline many common operations, including REDCapTidieR, tidyREDCap, ReviewR, REDCapCAST, and REDCapDM.
- Qualtrics: qualtRics provides functions to interact with Qualtrics, an online survey and data collection software platform.
- Wufoo: WufooR retrieves data from Wufoo, which is another data collection tool from the SurveyMonkey company.
- formr: formr facilitates use of the formr online survey framework, which relies on R via OpenCPU.
- Experigen: Rexperigen is a client for Experigen, which is a platform for creating phonology experiments.
- Usersnap: useRsnap connects to Usersnap, a tool for collecting feedback from web application users.
- KoboToolbox: robotoolbox is a suite of utilities for accessing and manipulating data from the KoboToolbox API.
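As referenced in the REDCap item above, a minimal REDCapR sketch; the URI is a hypothetical placeholder and the API token is read from an environment variable rather than hard-coded:

```r
# Export all records from a REDCap project.
result <- REDCapR::redcap_read(
  redcap_uri = "https://redcap.example.edu/api/",  # hypothetical instance
  token      = Sys.getenv("REDCAP_API_TOKEN")
)
ds <- result$data  # a data.frame of the project's records
```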
Web Analytics
The following packages interface with online services that facilitate web analytics.
- Google
- Google Adwords: RAdwords
- Google Analytics: googleAnalyticsR
- Google Trends: gtrendsR
- Azure
- Application Insights: AzureAppInsights
- Facebook Marketing: fbRads
- Smartly.io: RSmartlyIO loads Facebook and Instagram advertising data via the Smartly.io advertising service.
The following packages interface with tools that facilitate web analytics.
- webreadr can process various common forms of request log, including the Common and Combined Web Log formats and AWS logs.
- WebAnalytics provides tools for analysis of web application performance, workload, and user population. There is some overlap with webreadr, but webreadr focuses on reading log files, while WebAnalytics focuses on analysing them.
Publications
- Reference/bibliography/citation management: rorcid connects to the ORCID.org API, which can identify scientific authors and their publications (e.g., by DOI). rdatacite connects to DataCite, which manages DOIs and metadata for scholarly datasets. scholar extracts citation data from Google Scholar. rscopus extracts citation data from Elsevier Scopus. Convenience functions are also provided for comparing multiple scholars and predicting future h-index values. mathpix converts an image of a formula (typeset or handwritten) via the Mathpix web service to produce the ‘LaTeX’ code. zen4R connects to the Zenodo API, including management of depositions, attribution of DOIs and upload of files.
- Literature: europepmc connects to the Europe PubMed Central service. pubmed.mineR is for text mining of PubMed Abstracts that supports fetching text and XML from PubMed. jstor retrieves metadata, ngrams and full-texts from Data for Research service by JSTOR. aRxiv connects to arXiv, a repository of electronic preprints for computer science, mathematics, physics, quantitative biology, quantitative finance, and statistics. roadoi connects to the Unpaywall API for finding free full-text versions of academic papers. rcrossref is an interface to Crossref’s API.
Generating Synthetic Data
- MockaRoo API: mockaRoo generates mock or fake data based on an input schema.
- RandomAPI: randNames generates random names and personal identifying information.
Sports Analytics
Many CRAN packages interact with services facilitating sports analysis. For a more complete treatment of the topic, please see the SportsAnalytics CRAN Task View.
Reproducible Research
Using packages in this Web Technologies task view can help you acquire data programmatically, which can facilitate Reproducible Research. Please see the ReproducibleResearch CRAN Task View for more tools and information:
“The goal of reproducible research is to tie specific instructions to data analysis and experimental data so that scholarship can be recreated, understood, and verified.”
Other Web Services
- Push Notifications: RPushbullet provides an easy-to-use interface for the Pushbullet service, which provides fast and efficient notifications between computers, phones and tablets. pushoverr can send push notifications to mobile devices (iOS and Android) and desktops using ‘Pushover’. notifyme can control Philips Hue lighting.
- Automated Metadata Harvesting: oai and OAIHarvester harvest metadata using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) standard.
- Wikipedia: WikipediR is a wrapper for the ‘MediaWiki’ API, aimed particularly at the ‘Wikimedia’ “production” wikis, such as ‘Wikipedia’. WikidataR can request data from Wikidata.org, the free knowledge base. WikidataQueryServiceR is a client for the Wikidata Query Service.
- rerddap: A generic R client to interact with any ERDDAP instance, which is a special case of OPeNDAP (https://en.wikipedia.org/wiki/OPeNDAP), the Open-source Project for a Network Data Access Protocol. It allows the user to swap out the base URL to use any ERDDAP instance.
- duckduckr is an R interface to DuckDuckGo.
3. Frameworks for building web-based R applications
- Model Operationalization (previously DeployR) is a Microsoft product that provides support for deploying R and Python models and code to a server as a web service to later consume.
- shiny makes it easy to build interactive web applications with R.
- dashR is a web framework which is available for Python, R and Julia, with components written in React.js.
- Other web frameworks include: fiery, which is meant to be more flexible but less easy to use than shiny (reqres and routr are utilities used by fiery that provide HTTP request and response classes, and HTTP routing, respectively); rcloud, which provides an iPython notebook-style web-based R interface; and Rook, which contains the specification and convenience software for building and running Rook applications.
- The opencpu framework for embedded statistical computation and reproducible research exposes a web API interfacing R, LaTeX and Pandoc. This API is used for example to integrate statistical functionality into systems, share and execute scripts or reports on centralized servers, and build R based apps.
- Several general purpose server/client frameworks for R exist. Rserve and RSclient provide server and client functionality for TCP/IP or local socket interfaces. httpuv provides low-level socket and protocol support for handling HTTP and WebSocket requests directly within R. A related package, which httpuv effectively replaces, is `websockets` (retired from CRAN). servr provides a simple HTTP server to serve files under a given directory based on httpuv.
- Several packages offer functionality for turning R code into a web API. FastRWeb provides some basic infrastructure for this. plumber allows you to create a REST API by decorating existing R source code (a minimal sketch follows this list). beakr provides an R version of functionality found in Python’s Flask and JavaScript’s Express.js.
- RDCOMClient provides user-level access from R to other COM servers.
- radiant is a Shiny-based GUI for R that runs in a browser from a server or local machine.
- The ‘Tiki’ Wiki CMS/Groupware framework has an R plugin (`PluginR`) to run R code from wiki pages, and to use data from their own collected web databases (trackers). A demo: https://r.tiki.org/tiki-index.php.
- whisker: Implementation of logicless templating based on ‘Mustache’ in R. Mustache syntax is described at http://mustache.github.io/mustache.5.html.
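As referenced above, a minimal plumber sketch; the endpoint and file name are hypothetical:

```r
# plumber.R: decorate plain R functions to expose them as a REST API.

#* Echo back a message
#* @param msg The message to echo
#* @get /echo
function(msg = "") {
  list(msg = paste0("The message is: '", msg, "'"))
}

# Then, in a separate session:
#   pr <- plumber::plumb("plumber.R")
#   pr$run(port = 8000)
# GET http://localhost:8000/echo?msg=hello returns the echoed JSON.
```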
Other Useful Packages and Functions
- JavaScript: V8 is an R interface to Google’s open source, high performance JavaScript engine. It can wrap JavaScript libraries as well as NPM packages. js wraps V8 and validates, reformats, optimizes and analyzes JavaScript code.
- Email: mailR is an interface to Apache Commons Email to send emails from within R. sendmailR provides a simple SMTP client. gmailr provides access to Google’s Gmail RESTful API. Microsoft365R provides a client for Microsoft’s Outlook email service, both personal (outlook.com) and as part of the Microsoft 365 (formerly known as Office 365) suite.
- Mocking: webmockr stubs and sets expectations on HTTP requests. It is inspired by Ruby’s `webmock`. webmockr only helps mock HTTP requests, and returns nothing when requests match expectations. It integrates with crul and httr. See Testing for mocking with returned responses.
- Testing: vcr provides an interface to easily cache HTTP requests in R package test suites (but can be used outside of testing use cases as well). vcr relies on webmockr to do the HTTP request mocking. vcr integrates with crul and httr. httptest provides a framework for testing packages that communicate with HTTP APIs, offering tools for mocking APIs, for recording real API responses for use as mocks, and for making assertions about HTTP requests, all without requiring a live connection to the API server at runtime. httptest only works with httr.
- Miscellaneous: webutils contains various functions for developing web applications, including parsers for `application/x-www-form-urlencoded` as well as `multipart/form-data`. mime guesses the MIME type for a file from its extension. rsdmx provides tools to read data and metadata documents exchanged through the Statistical Data and Metadata Exchange (SDMX) framework; it focuses on the SDMX XML standard format (SDMX-ML). robotstxt provides functions and classes for parsing robots.txt files and checking access permissions (a tiny sketch follows this list); spiderbar does the same. uaparserjs uses the JavaScript “ua-parser” library to parse User-Agent HTTP headers. rapiclient is a client for consuming APIs that follow the Open API format. restfulr models a RESTful service as if it were a nested R list.
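As referenced above, a tiny robotstxt sketch with a hypothetical URL:

```r
library(robotstxt)

# TRUE if the site's robots.txt permits fetching this path.
paths_allowed("https://example.com/some/page")
```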
4. Low-level operations
Tools for Working with URLs
- The `httr::parse_url()` function can be used to extract portions of a URL. The `RCurl::URLencode()` and `utils::URLencode()` functions can be used to encode character strings for use in URLs, and `utils::URLdecode()` decodes back to the original strings. urltools can also handle URL encoding, decoding, parsing, and parameter extraction. (A short sketch follows this list.)
- ipaddress facilitates working with IP addresses and networks.
- urlshorteneR offers URL expansion and analysis for Bit.ly, Goo.gl, and is.gd. longurl uses the longurl.org API to provide similar functionality.
- gdns provides access to Google’s secure HTTP-based DNS resolution service.
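As referenced above, a short sketch of URL encoding and parsing:

```r
# Percent-encode a string for use in a URL, then decode it back.
enc <- utils::URLencode("a value with spaces & symbols", reserved = TRUE)
utils::URLdecode(enc)

# Split a URL into its components.
parts <- httr::parse_url("https://example.com/path?q=rstats&page=2")
parts$hostname  # "example.com"
parts$query     # list(q = "rstats", page = "2")
```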
Additional tools for internet communication
For specialized situations, the following resources may be useful:
- RCurl is another low-level client for libcurl. Of the two low-level curl clients, we recommend using curl. httpRequest is another low-level package for HTTP requests that implements the GET, POST and multipart POST verbs, but we do not recommend its use.
- request is a high-level package that is useful for developing other API client packages. httping provides simplified tools to ping and time HTTP requests, around httr calls. httpcache provides a mechanism for caching HTTP requests.
- nanonext is an alternative low-level sockets implementation that can be used to perform HTTP and streaming WebSocket requests synchronously or asynchronously over its own concurrency framework. It uses the NNG/mbedTLS libraries as a backend.
- For dynamically generated webpages (i.e., those requiring user interaction to display results), RSelenium can be used to automate those interactions and extract page contents. It provides a set of bindings for the Selenium 2.0 webdriver using the ‘JsonWireProtocol’. It can also aid in automated application testing, load testing, and web scraping. seleniumPipes provides a “pipe”-oriented interface to the same.
- Authentication: Using web resources can require authentication, either via API keys, OAuth, a username:password combination, or other means. Additionally, sometimes the authentication details must be passed in the header of an HTTP call, which requires a little extra work. API keys and username:password combos can be combined within a URL for a call to a web resource, or can be specified via commands in RCurl or httr2. OAuth is the most complicated authentication process, and can be most easily done using httr2 (a small sketch follows). See the 6 demos within httr, three for OAuth 1.0 (LinkedIn, Twitter, Vimeo) and three for OAuth 2.0 (Facebook, GitHub, Google). ROAuth provides a separate R interface to OAuth. OAuth is easier to do in httr, so start there. googleAuthR provides an OAuth 2.0 setup specifically for Google web services, and AzureAuth provides similar functionality for Azure Active Directory.
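A minimal httr2 authentication sketch; the endpoints, environment variable names, and credentials are hypothetical:

```r
library(httr2)

# Bearer-token (API key) authentication via a request header.
resp <- request("https://api.example.com/v1/me") |>
  req_headers(Authorization = paste("Bearer", Sys.getenv("MY_API_TOKEN"))) |>
  req_perform()

# HTTP basic (username:password) authentication.
resp2 <- request("https://api.example.com/v1/me") |>
  req_auth_basic("username", Sys.getenv("MY_PASSWORD")) |>
  req_perform()
```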
Handling HTTP Errors/Codes
- fauxpas brings a set of Ruby- or Python-like R6 classes for each individual HTTP status code, allowing simple and verbose messages, with a choice of using messages, warnings, or stops.
- httpcode is a simple package to help a user/package find HTTP status codes and associated messages by name or number.
Security
- securitytxt identifies and parses web security policy (‘security.txt’) files.
5. Resources
CRAN packages
Related links
- Omega Project for Statistical Computing: Open-source packages from authors in (or close to) the R Core Team, especially for web-based technologies, actively developed 1998-2013.
Other resources
- CRAN Task View: Databases
- CRAN Task View: MachineLearning
- CRAN Task View: ReproducibleResearch
- CRAN Task View: Spatial
- CRAN Task View: SportsAnalytics
- GitHub Project: dashR
- GitHub Project: discgolf
- GitHub Project: feedeR
- GitHub Project: formr
- GitHub Project: gdns
- GitHub Project: ipapi
- GitHub Project: meetupr
- GitHub Project: mockaRoo
- GitHub Project: nominatim
- GitHub Project: osmplotr
- GitHub Project: randNames
- GitHub Project: rcloud
- GitHub Project: RDCOMClient
- GitHub Project: RDoubleClick
- GitHub Project: RHTMLForms
- GitHub Project: rydn
- GitHub Project: securitytxt
- GitHub Project: stackr
- GitHub Project: twitter_ideology
- GitHub Project: twitterreport
- GitHub Project: useRsnap
- GitHub Project: XMLSchema
- Google Code Project: segue