This is a huge pseudo-wiki for design patterns, how-tos, useful packages, and my opinions on a whole bunch of stuff and use-cases - in R.
This is probably best searched rather than read top to bottom.
Will constantly be a WIP.
---
# Basic patterns
There are some basic things that either use lesser-known base R, or fit pretty well with R's idioms, that are very useful:
- [https://masalmon.eu/2023/06/06/basic-patterns/](https://masalmon.eu/2023/06/06/basic-patterns/ "https://masalmon.eu/2023/06/06/basic-patterns/"):
- `%||%` - the null-coalescing operator: `x %||% y` returns `y` when `x` is `NULL`, otherwise `x`
- utils::modifyList to merge a default list of values & custom overrides
- Vectorised `if_else` out of dplyr - never realised!
- Note to self: make a `%nin%` function
- `nzchar()` - can use this to check if e.g. environment variables are set
- Partial (curried) functions: https://mikedecr.netlify.app/blog/partial_fns_ggplot/
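A few of these are one-liners worth keeping on hand. A minimal base-R sketch (`%||%` ships with base R from 4.4.0 and with {rlang}, but is trivial to define yourself):

```r
# Null-coalescing: return y only when x is NULL
# (built into base R from 4.4.0; long available via {rlang})
`%||%` <- function(x, y) if (is.null(x)) y else x

# The "not in" operator from the note-to-self above
`%nin%` <- Negate(`%in%`)

NULL %||% "default"   # "default"
"set" %||% "default"  # "set"
2 %nin% c(1, 3, 5)    # TRUE

# nzchar() for env vars: Sys.getenv() returns "" when unset
nzchar(Sys.getenv("SOME_PROBABLY_UNSET_VAR"))
```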
There are some pretty standard things that R is missing that I do end up re-implementing fairly regularly. I should keep track of these more - and maybe roll my own more consistently. Ideas include:
- Implementing hash/dicts: [https://cran.r-project.org/web/packages/hash/index.html](https://cran.r-project.org/web/packages/hash/index.html "https://cran.r-project.org/web/packages/hash/index.html")
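Before reaching for {hash}, note that base environments already behave like a hash map; a quick sketch:

```r
# An environment with hash = TRUE is effectively a dictionary
h <- new.env(hash = TRUE, parent = emptyenv())

h[["alpha"]] <- 1
h[["beta"]]  <- 2

h[["alpha"]]                # 1
exists("gamma", envir = h)  # FALSE
ls(h)                       # the keys: "alpha" "beta"
```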
There are also some slightly-to-substantially more exotic things, like:
- {maybe} package: includes nice Nothing type - look into
- [Deferred List - A Read-Only List-Like Object with Deferred Access • deflist (bbuchsbaum.github.io)](https://bbuchsbaum.github.io/deflist/ "https://bbuchsbaum.github.io/deflist/")
# Visualisation
## Visualisation generally
Getting beyond just bar charts: https://z3tt.github.io/beyond-bar-and-box-plots/
Excellent explainer on uncertainty, how it differs between frequentist and Bayesian approaches, visualising it, and the pros and cons (risks) of different approaches: https://clauswilke.com/dataviz/visualizing-uncertainty.html#visualizing-the-uncertainty-of-point-estimates
**Use the whole buffalo**
- I have a theory...
- Some good tips in here - notably: **use all the elements**. Use some nifty tricks to build your legend into your titles etc.! [HTML CSS for R](https://albert-rapp.de/posts/16_html_css_for_r/16_html_css_for_r.html "https://albert-rapp.de/posts/16_html_css_for_r/16_html_css_for_r.html")
## {plotly}
Good for interactivity in a web (or web-like) environment. A bit fiddlier to extract static images from - theoretically there are interfaces to command-line tools, but those can be a pain (a) in general and (b) especially in locked-down (corpo/gov't) environments.
### Multi-categorical axes
- Just provide a list to the axis argument, where the first sub-list/vector is the outer group and the second sub-list/vector is the inner group:
```{r}
plotly::plot_ly(
  x = list(
    rep(c("a", "b", "c"), each = 3),
    rep(c(1, 2, 3), 3)
  ),
  y = c(1, 2, 3, 4, 5, 6),
  type = "bar"
) %>% plotly::layout(barmode = "stack")
```
### Trace Order (Out of Order!)
Something that I ran into particularly when making maps / choropleths was the nuance of trace order.
#### Data
Using a flat data frame or tibble, such as the output of `broom::tidy()` when called on a `SpatialPolygonsDataFrame`, that looks something like:
| long | lat | order | hole | piece | group | id | Name | State | Count |
|------|-----|-------|------|-------|-------|----|------|-------|-------|
| 147 | -36 | 1 | F | 1 | 1.1 | 1 | Albury | NSW | 30 |
| 147 | -36 | 2 | F | 1 | 1.1 | 1 | Albury | NSW | 30 |
(i.e. each vertex of the local council boundary is its own row)
#### Problem
I was making a Shiny app that provided local government areas (councils) that could be clicked on to drill down to the postcodes that comprise them.
The structure was a council map which, when clicked, updated the postcode data frame, which prompted the postcode map to re-render (using the newly-filtered data).
To make the council map, I was using:
```
output$mainPlot <- renderPlotly({
  ggplotobject <- ggplot(
    data = df.lga(),
    aes(
      x = long, y = lat,
      text = paste0("Council: ", Name, "\n", "Count: ", round(Count, 0))
    )
  ) +
    geom_polygon(aes(group = group, fill = Count)) +
    scale_fill_gradient(low = "#e3f5a4", high = "#13513e") +
    theme(
      line = element_blank(),
      axis.text = element_blank(),
      axis.title = element_blank(),
      panel.background = element_blank(),
      plot.margin = margin(0, 0, 0, 0, "mm")
    ) +
    coord_quickmap()
  ggplotly(
    ggplotobject,
    tooltip = "text",
    source = "LGAmap"
  ) %>%
    config(scrollZoom = TRUE, displayModeBar = FALSE) %>%
    layout(dragmode = "pan")
})
```
The issue was that when the `plotlyOutput` returned a click event, it would _not_ return the council I thought I had clicked on. I had anticipated that the polygons - ordered by `id` in the data frame - would be rendered in the same order, so that when the click event returned `42` I could look up ~~the polygon with `ID == 42`~~ the 42nd polygon. No such luck: I'd click on a council in New South Wales and wind up with the name/outline/data for a council in South Australia.
#### The Solution
The answer lies under the hood of `plotly`. Amongst the dark magic it weaves to turn R code into JavaScript and HTML, it (nearly always?) re-orders traces for whatever reason, and this doesn't seem to be well documented (at least not anywhere I could find).
In this particular use-case, after inspecting the HTML output, it was re-ordering the polygons __first__ based on their fill value (i.e. `Count` in the table above), __then__ on their `ID`.
The workaround I developed was to create a lookup dictionary.
It would take the unique councils,
```
df <- df.lga() %>% distinct(Name, .keep_all = T)
```
order them by `Count` and then by `ID`,
```
df <- df[with(df, order(Count, id)), ]
```
calculate the resulting render order,
```
df$plotorder <- seq(0, nrow(df) - 1, by = 1)
```
and then the reactive that this was embedded in would return the (unique) name of the council for use in other lookups elsewhere.
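Putting those steps together, a toy, self-contained sketch of the lookup (column names follow the table above; the real data comes from `df.lga()`):

```r
# Toy stand-in for distinct(df.lga(), Name, .keep_all = TRUE)
df <- data.frame(
  Name  = c("Albury", "Armidale", "Ballina"),
  id    = c(1, 2, 3),
  Count = c(30, 10, 30)
)

# Reproduce plotly's observed render order: fill value first, then id
df <- df[with(df, order(Count, id)), ]

# plotly's click events are zero-indexed
df$plotorder <- seq_len(nrow(df)) - 1

# Map a click event's index back to a council name
lookup_name <- function(click_idx) df$Name[df$plotorder == click_idx]
lookup_name(0)  # "Armidale" - the lowest Count renders first
```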
## ggplot2
### Unsorted
- https://rstudio-conf-2022.github.io/ggplot2-graphic-design/ - h/t Will Mackey
- {ggridges}
- Adding Awesome icons to ggplot: [https://nrennie.rbind.io/blog/adding-social-media-icons-ggplot2/](https://nrennie.rbind.io/blog/adding-social-media-icons-ggplot2/ "https://nrennie.rbind.io/blog/adding-social-media-icons-ggplot2/")
- {ggfx}
- [https://github.com/jonocarroll/ggghost](https://github.com/jonocarroll/ggghost "https://github.com/jonocarroll/ggghost") - a better(?) way to track, trace, and re-generate ggplots?
- Insane but an interesting problem - putting LaTeX on a ggplot: [https://wjschne.github.io/posts/2023-07-23-latex-equation-in-ggplot2/](https://wjschne.github.io/posts/2023-07-23-latex-equation-in-ggplot2/ "https://wjschne.github.io/posts/2023-07-23-latex-equation-in-ggplot2/")
- Potential ggplot extensions:
- Maybe -> https://www.abdoulblog.com/posts/2023-05-31_ggtricks-intro/
- And ggh4x
- Cowplot for aligning stuff
- {ggfx} package - wowee
- {gghighlight} if I don't already have that earmarked somewhere
- also do something with {gganimate} - seems to play _really_ well with {gghighlight}
- {ggpattern}
- I definitely already have {ggrepel} written down somewhere
- (Almost) branchless {ggplot} design patterns - finally!!! [https://www.tjmahr.com/ggplot2-how-to-do-nothing/](https://www.tjmahr.com/ggplot2-how-to-do-nothing/ "https://www.tjmahr.com/ggplot2-how-to-do-nothing/")
- {ggforce} - for all kinds of useful, *presentation* functionality - I like their point on their github page: that ggplot2 is primarily aimed at exploratory data analysis, so what's missing is a number of nice data presentation things.
- [Make “Solar System” Plots With {ggsolar}](https://rud.is/b/2023/04/12/make-solar-system-plots-with-ggsolar/ "https://rud.is/b/2023/04/12/make-solar-system-plots-with-ggsolar/")
- {ggbump}: a "bump" chart to show changes in rankings over time. Pairs well with {gghighlight}.
### Facets with unused factor levels
- To drop unused factor levels _within_ facets - e.g. you only have levels 1-3 in facet A, and 2-4 in facet B - specify `scales = "free_x"` in the `facet_*` call: `facet_wrap(~ groups, nrow = 3, scales = "free_x")`
> For future readers, `drop` drops any factor levels that weren't used in _any_ facet of the plot, while `scales` drops any factor level that wasn't used in a particular facet of the plot. This took me a while to understand from this post, so I thought I'd clarify here to save someone else the trouble.
### Facets with different variables
- So we can drop unused factor levels - maybe we can have one of the facets (column/row) be different variables with totally different factor levels?
- Yup: use facet_grid and bash data into fully long form, like:
    `Facet 1 / Facet 2 Name / Facet 2 Level / Value`
- If that produces weird column widths you can also specify: `facet_grid(space = "free")`
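A minimal base-R sketch of that "fully long" shape (column names here are made up for illustration):

```r
# Two unrelated variables with different levels, bashed into one long frame:
# Facet 1 / Facet 2 Name / Facet 2 Level / Value
wide <- data.frame(
  region = c("North", "South"),
  age    = c("0-14", "65+"),
  income = c("Low", "High"),
  value  = c(10, 20)
)

long <- rbind(
  data.frame(facet1 = wide$region, facet2_name = "Age",
             facet2_level = wide$age,    value = wide$value),
  data.frame(facet1 = wide$region, facet2_name = "Income",
             facet2_level = wide$income, value = wide$value)
)

# Then something like:
# ggplot(long, aes(facet2_level, value)) +
#   geom_col() +
#   facet_grid(facet1 ~ facet2_name, scales = "free_x", space = "free")
```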
### Correlation plot
- Use package `{ggcorrplot}`
### Building custom themes
- [RLadies on Twitter](https://twitter.com/WeAreRLadies/status/1597692302201323522?t=U3Yc-TPwd705hykMP6mwcQ&s=09)
- `{nousstyle}`
### Insets
- Decide how best to do insets
- There was that one that Matt Cowgill linked
- {ggpp} supports this too, but maybe not as elegantly?
- [GitHub - hughjonesd/ggmagnify: Create a magnified inset of part of a ggplot object](https://github.com/hughjonesd/ggmagnify "https://github.com/hughjonesd/ggmagnify")
### Plotting distributions
https://mjskay.github.io/ggdist/
> [ggdist](https://mjskay.github.io/ggdist/) is an R package that provides a flexible set of `ggplot2` geoms and stats designed especially for visualizing distributions and uncertainty. It is designed for both frequentist and Bayesian uncertainty visualization, taking the view that uncertainty visualization can be unified through the perspective of distribution visualization: for frequentist models, one visualizes confidence distributions or bootstrap distributions (see `[vignette("freq-uncertainty-vis")](https://mjskay.github.io/ggdist/articles/freq-uncertainty-vis.html)`); for Bayesian models, one visualizes probability distributions (see the [tidybayes](https://mjskay.github.io/tidybayes/) package, which builds on top of `ggdist`).
### Combining plots
- All the usual suspects: `{cowplot}`, `{ggarrange}`, `{gridExtra}`...
https://patchwork.data-imaginist.com/
> The goal of `patchwork` is to make it ridiculously simple to combine separate ggplots into the same graphic. As such it tries to solve the same problem as `[gridExtra::grid.arrange()](https://rdrr.io/pkg/gridExtra/man/arrangeGrob.html)` and `cowplot::plot_grid` but using an API that incites exploration and iteration, and scales to arbitrarily complex layouts.
# Literate programming
## rmarkdown
- Look into Shiny R Markdown documents? Still trying to figure out the best lightweight way to get semi-Shiny functionality... (see Yihui's bookdown)
- Rmarkdown: Bit.ly/RMDmarvel - great resource for getting to grips with things, especially for parameterising / templating documents
### Troubles with Tables
- Tables are nice to have in the visual editor, but result in spaghetti code and keep auto-sizing in evil ways!
- https://rstudio.github.io/visual-markdown-editing/markdown.html#line-wrapping
- If we're using tables, we can whack `editor_options: markdown: wrap: N` in the frontmatter and it'll handle wrapping much better - results in stable sizing: what you do in the VizEditor remains.
- Just has 'unintended side-effects'(?) in the sense that the rest of the document is now more fixed than it would be otherwise
- E.g. resizing the knitted HTML doesn't(?) change things
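The frontmatter item looks like this (a wrap width of 72 is just an example value):

```yaml
---
editor_options:
  markdown:
    wrap: 72
---
```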
## Quarto
- Still need to look into this one, but seems like a monotonic improvement over rmarkdown?
- Check out James Goldie's presentation on this (jimjamslam)
- https://github.com/jimjam-slam/svelte-in-quarto
- Pivot to Quarto and make a Nous style? [https://emilhvitfeldt.com/talk/2023-09-19-quarto-theming-positconf/](https://emilhvitfeldt.com/talk/2023-09-19-quarto-theming-positconf/ "https://emilhvitfeldt.com/talk/2023-09-19-quarto-theming-positconf/")
- If one ever needs Word-like word counts in Quarto: [GitHub - andrewheiss/quarto-wordcount: Quarto extension for calculating accurate word counts](https://github.com/andrewheiss/quarto-wordcount "https://github.com/andrewheiss/quarto-wordcount")
# Slide Decks
- `{officer}` and friends for Powerpoint
- Quarto / RMD also have some options
- "Snap slides" from Yihui:
- [https://yihui.org/en/2023/09/snap-slides/](https://yihui.org/en/2023/09/snap-slides/ "https://yihui.org/en/2023/09/snap-slides/")
- [https://cran.r-project.org/web/packages/markdown/vignettes/slides.html](https://cran.r-project.org/web/packages/markdown/vignettes/slides.html "https://cran.r-project.org/web/packages/markdown/vignettes/slides.html")
# Projects
- Cool posit cloud feature: [https://posit.co/blog/introducing-project-templates-in-posit-cloud/](https://posit.co/blog/introducing-project-templates-in-posit-cloud/ "https://posit.co/blog/introducing-project-templates-in-posit-cloud/")
# Class systems
![[R Class Systems.excalidraw.svg]]
# Encoding
# Regression / causal inference
- Marginal effects for regressions: **contrasts**
- standard regression diagnostics output via ggplot - [GitHub - graysonwhite/gglm: Grammar of Graphics for Linear Model Diagnostic Plots](https://github.com/graysonwhite/gglm "https://github.com/graysonwhite/gglm")
# Geospatial
- `{sf}` cheat sheet: https://r-spatial.github.io/sf/index.html
- R Ladies: https://m.youtube.com/watch?v=6Ka021fABxk
## Doing stuff
From [Mike Mahoney](https://www.mm218.dev/posts/2022-12-12-tools):
https://github.com/rspatial/terra
>
> I’m not going to lie: I was dreading the R spatial migration. I have so many legacy projects relying on raster and friends, and was expecting the transition to be an incredible headache without bringing any real benefits to my work.
>
> I could not have been more wrong. Switching workloads to terra has been a fantastic investment across our research group. The terra package is faster than raster, and benefits from over a decade of lessons learned from the raster package. The breadth of operations implemented is incredible as well; a weekly conversation in my lab involves someone asking “how do I do X?”, where X is some complex calculation that would be incredibly difficult to implement, and someone answering “oh, use this one-liner from terra.”
https://gdal.org/
>
> For me, 2022 was the year of CLI GDAL commands. I have now written two papers entirely on the back of shell scripts calling `gdal_calc` and `gdalwarp`.
>
> For those with normal hobbies, GDAL is a software library that describes itself as a “translator library” between raster and vector formats. In practice, however, GDAL is a full-featured raster toolkit with pretty decent vector support; a huge amount of common raster operations can be run by chaining together GDAL commands. And that’s huge, because GDAL is _fast_ and can handle much, much larger data than R can.
https://r-spatialecology.github.io/landscapemetrics/
>
> Imagine, if you will, that everyone – quite literally every single person – in your field uses tool X. X is mostly focused on calculating statistics, and because of its dominance most of those statistics are known primarily as “the X set of statistics”. Most people in your field don’t know how to calculate the statistics without X and aren’t particularly interested in trying; a “correct” statistic is one that agrees with tool X.
>
> Now, imagine tool X is closed-source, only runs on Windows, and was first released in 1995, so doesn’t exactly integrate with other software. In order to address those drawbacks, a team of scientists develop an R package that calculates the same statistics as X. This is already incredibly impressive; I cannot stress enough that everyone uses X and expects your results to match it exactly, and sometimes you just can’t figure out how to precisely match the closed-source Windows-only software. This was a big job.
>
> Now imagine that two years later, the person who wrote tool X retires and every trace of tool X is erased from the internet. This suddenly becomes a much bigger job.
>
> That’s, as best as I can tell from the outside, what happened to the team behind landscapemetrics. FRAGSTATS existed and was the standard reference for a whole boat of statistics; landscapemetrics provided an open-source implementation; FRAGSTATS suddenly no longer existed. I don’t want to sound like I’m criticizing FRAGSTATS here – for a very long time, that software provided an incredible service for free to a huge number of researchers, and I don’t think releasing something on the Internet creates an infinite obligation to make sure the download links never expire.
## Visualisation
From [Mike Mahoney](https://www.mm218.dev/posts/2022-12-12-tools):
https://github.com/yutannihilation/ggsflabel
>
> Putting labels on maps is a recurring meme among GIS users. This is one of those things that feels like it should not be that hard, and turns out to actually be impossible.
>
> But somehow, ggsflabel gets it… right? Almost every time? It’s magic. There’s literally no other explanation than magic. You could sell this to any university with an ArcMap subscription for thousands of dollars, but instead I installed it for free from GitHub.
https://github.com/paleolimbot/ggspatial
>
> Just like label placement, every other part of making a map is surprisingly hard. Making coordinate reference systems play nicely with plotting libraries is hard, adding directionally-aware elements to a map is hard, adding scale bars and other distance-aware elements to a map is hard.
>
> The ggspatial package makes it easier. My research group uses it extensively for our north arrows and scale bars, but the entire package is a gem. It solves a problem and does it well.
`{sugarbag}` per https://www.mattcowgill.com/posts/election_sugarbag/election_sugarbag.html
> Maps are great! But sometimes they can be a bit deceptive. This is particularly the case when the population density of a place varies greatly, as it does in Australia. Australians live in a small number of large-ish cities, with vast expanses of largely empty space in between them. This is a challenge to visualise.
- `{leaflet}` undisputed champion if you can use it (the interactive vs static problem)
- Leaflet extensions: [https://github.com/tomroh/leaflegend](https://github.com/tomroh/leaflegend "https://github.com/tomroh/leaflegend")
# Automation/ops
For report outputs:
https://github.com/rjake/headliner
> The goal of `headliner` is to translate facts into insights. Given two values, `headliner` generates building blocks for creating dynamic text. These talking points can be combined using using `glue` syntax to add informative titles to plots, section headers or other text in a report.
https://nombre.rossellhayes.com/
> **nombre** converts numeric vectors to character vectors of English words. You can use it to express numbers as cardinals (one, two, three) or ordinals (first, second, third), as well as numerators and denominators. **nombre** supports not just whole numbers, but also negatives, fractions, and ratios.
https://github.com/coolbutuseless/numberwang
> `numberwang` will convert floating point numbers (and integers) to their word representations, and vice versa.
>
> The key differentiator of this package, compared to {nombre}, is that it supports decimal representations by listing individual decimal digits.
- Look at Gmail with R:
- https://github.com/r-lib/gmailr
- Figure out who sends the most emails etc. - bring on the purge!
- Think about setting stuff up to *send* emails...?
- Good for reporting?
- Cross-reference with **blastula**?
# DevOps
## General
- Slowly and surely building decent linting (/just general code-checks?) - [https://www.rostrum.blog/2023/08/19/find-bad-names/](https://www.rostrum.blog/2023/08/19/find-bad-names/ "https://www.rostrum.blog/2023/08/19/find-bad-names/")
## APIs
https://www.rplumber.io/
> Plumber allows you to create a web API by merely decorating your existing R source code with `roxygen2`-like comments. Take a look at an example.
## Observability
https://github.com/atheriel/openmetrics/
> **openmetrics** is an opinionated client for [Prometheus](https://prometheus.io/) and the related [OpenMetrics](https://openmetrics.io/) project. It makes it possible to add predefined and custom metrics to any R web application and expose them on a `/metrics` endpoint, where they can be consumed by Prometheus services.
>
> The package includes built-in support for Plumber and Shiny applications, but is highly extensible.
https://daroczig.github.io/logger/
> A lightweight, modern and flexible logging utility for R – heavily inspired by the `futile.logger` R package and `logging` Python module.
- The github page for this one also points to a few alternatives, with some pros and cons
# Shiny apps
## Alternative approaches
- WASM-based self-contained apps might be the way of the future - [[#webr]]
## Front-end options
- [Shiny - Towards easy, delightful, and customizable dashboards in Shiny for R with {bslib} (posit.co)](https://shiny.posit.co/blog/posts/bslib-dashboards/ "https://shiny.posit.co/blog/posts/bslib-dashboards/")
- Use {bslib}! The layouts and the card-based approach are really good for modular work as well.
- And the documentation is very useful as well: https://rstudio.github.io/bslib/index.html
## Paradigms
**Modular, Modular, Modular**
- Basically, regardless of whatever paradigm you might be following, you should get familiar with modules ASAP and start using them.
- [https://shiny.posit.co/r/articles/improve/modules/](https://shiny.posit.co/r/articles/improve/modules/ "https://shiny.posit.co/r/articles/improve/modules/")
**Recommended structure:**
- Most projects I do will have a processing/analysis aspect, which will often produce some of its own output artefacts, and then optionally a webtool.
- Goals:
- Reuse as much code as possible
- Keep analysis / code as consistent as possible
- Avoid jumping through unnecessary hoops (like potentially going to the extent of package development)
- Use {targets}
- Solution:
- Have a top-level directory that has the core pipeline and then different ways to "use" that analysis
- Folder has `_targets.R`, `run-targets.R`, `webtool.R`, `run-webtool.R`, `deploy-webtool.R` or same vibes
- So what you can do is:
- Have your top-level webtool file call `shiny::shinyAppDir` on a `webtool/` subfolder - keep the purely-webtool-related stuff (like HTML, CSS files) in that subfolder
- But also call the *same* `R/` folder
- And read data straight out of `_targets/objects` (and keep `_targets/meta/meta` so you can just use `tar_load`)
- To do this, you'll need to understand **scoping** and **architecture**
- **Architecture**
- I want a better word for this that's a bit less grandiose, but what I mean is: *which files comprise your Shiny app*.
- For instance, there's the classic "one-file" `app.R` approach, there's the folder-with-global-server-UI-scripts, and actually there's a lot of other flexibility that's just less well-documented - both in Posit docs and just the internet at large
- The `app.R` approach is actually more generic than it gets treated: any script that ends with an app object works the same, but some of the RStudio integrations etc. treat the file name "app.R" specially. You can call it anything and manually call `shiny::runApp("my-app-file.R")` to get the same behaviour.
- Within the `app.R` approach, you have some options, which *are* pretty well-documented in the `?runApp` documentation - but you'll run into headaches unless you have a good grasp on...
- **Scoping**
- This is not properly explained fully anywhere. The main Posit documentation for scoping gets close but doesn't quite say it verbatim.
- Things defined inside `server()` are per-session, and visible only within the server
- Things inside `app.R` (or equivalent - basically, whatever gets used by `shiny::runApp`) are in some weird environment limbo, where they're available across sessions, *but only within* `server()`
- The `global.R` script gets treated specially: it gets sourced into the global environment automatically by `{shiny}`, which means those things are available across sessions, *and across the server and the UI*
- So you can manually manage some of these things by *putting* things into the global environment within `app.R`/equivalent
- So what I suggest is:
- `my-app.R` maybe calls a bunch of stuff as though it was `global.R`, but you'll need to be careful about what's being put into what environments.
- If you just want something to be available in all sessions but only within the server, you can use the usual suspects - `source`, `targets::tar_source`, and so on - but pointed at the "current" environment (`local = environment()` for `source()`, `envir = environment()` for `tar_source()`), which is the `runApp` environment - the same as if you had used the out-of-the-box `app.R` approach
- If you want them available across server and UI (and therefore across all sessions), then you'll need to put things in the global environment manually.
- It then ends with `shinyApp` specifying a UI and server, or maybe `shinyAppDir` to just provide the `webtool/` directory with its own `ui.R` and `server.R` files.
- This has the nice property of keeping things tidy, but you'll need to be thoughtful about how each of these source other things - I think ideally you would source modules from `my-app.R` so that works properly - I still need to figure that out.
- It should pull a bunch of data from `_targets/objects`, in a way that supports the next step (e.g. individual `tar_load(...)` calls)
- Create `deploy-my-app.R` which uses the `appFiles` argument to selectively deploy:
- Your key files (`my-app.R`, any setup-type scripts in the top folder)
- Your shared R files in `R/`
- Your webtool files in `webtool/`
- And reads the lines of `my-app.R` to do some static analysis / regex to figure out which specific {targets} artefacts it needs (rather than deploy the *whole* cache)
- Note that the whole reason we're doing this step - rather than relying on the RStudio GUI - is because the GUI doesn't let you cherry-pick files within folders. So you'd be forced to deploy all of the cache.
- https://mastering-shiny.org/index.html
- [Medium post on the subject](https://blog.devgenius.io/which-r-shiny-framework-is-the-best-a-comparison-of-vanilla-shiny-golem-rhino-and-leprechaun-c02ad8e2aa8c) (behind login wall)
- [Series of blog posts](https://mjfrigaard.github.io/posts/vanilla-shiny/) inspired by the same
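The core scoping trick above - sourcing helpers into the *current* environment rather than the global one - can be demonstrated outside Shiny entirely (file names here are illustrative):

```r
# source(local = environment()) puts the helper's bindings in the caller's
# environment (analogous to the runApp() environment), not the global env.
helper_file <- tempfile(fileext = ".R")
writeLines('greet <- function() "hello from helper"', helper_file)

run_app_like <- function() {
  source(helper_file, local = environment())
  greet()  # visible here...
}

run_app_like()   # "hello from helper"
exists("greet")  # FALSE - ...but nothing leaked into the global environment
```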
**Key options**
- Vanilla
- Just your usual `app.R` or `ui.R` and `server.R` files, optionally modularised
- [@mjfrigaard blogpost](https://mjfrigaard.github.io/posts/vanilla-shiny/)
- As a package (no extra framework)
- [@mjfrigaard blogpost](https://mjfrigaard.github.io/posts/my-pkg-app/)
- {golem}
- [@mjfrigaard blogpost](https://mjfrigaard.github.io/posts/my-golem-app/)
- {rhino}
- [@mjfrigaard blogpost](https://mjfrigaard.github.io/posts/my-rhino-app/)
- {leprechaun}
- [@mjfrigaard blogpost](https://mjfrigaard.github.io/posts/my-leprechaun-app/)
### Golem
- Book on production-grade shiny: https://engineering-shiny.org/index.html
### Rhino
https://appsilon.github.io/rhino/articles/explanation/what-is-rhino.html
> Rhino is an R package designed to help you build high quality, enterprise-grade Shiny applications at speed. It allows you to create Shiny apps “The Appsilon Way” - like a fullstack software engineer: apply best software engineering practices, modularize your code, test it well, make UI beautiful and think about adoption from the very beginning.
Initial thoughts on {rhino}:
- This one feels particularly not-R-like? - you get the power but also the responsibility of a fully blank web page.
- I kinda like the concept of using {box} to manage importing things - but I feel like {import} might be better.
## Nifty widget and extensions
- Shiny: queryBuilder widget: [https://github.com/hfshr/jqbr](https://github.com/hfshr/jqbr "https://github.com/hfshr/jqbr")
- AwesomeShiny or ShinyAwesome - GitHub list - https://github.com/nanxstats/awesome-shiny-extensions (need to pick through this a bit more)
## Design patterns
https://github.com/jcrodriguez1989/heyshiny
> Add Speech Recognition to your Shiny app! The `heyshiny` package provides a new Shiny input, the `speechInput()`. This new input allows your Shiny app to listen to the microphone, recognize the speech, and return it as text.
URL parameters: https://stackoverflow.com/questions/32872222/how-do-you-pass-parameters-to-a-shiny-app-via-url
Use {memoise} to cache expensive outputs at run-time?
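{memoise} wraps a function so repeat calls with the same arguments hit a cache; a hand-rolled base-R sketch of the same idea:

```r
# What {memoise} automates: cache results keyed by the arguments
cache <- new.env(parent = emptyenv())

expensive <- function(x) {
  key <- paste0("x=", x)
  if (!is.null(cache[[key]])) return(cache[[key]])
  # ...imagine a slow model fit or query here...
  cache[[key]] <- x^2
  cache[[key]]
}

expensive(4)  # 16 - computed
expensive(4)  # 16 - served from the cache on the second call
```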
#### Busy / status
- Showing that Shiny is busy:
- https://stackoverflow.com/questions/17325521/r-shiny-display-loading-message-while-function-is-running
- Pop up a modal with no escape button (`showModal(modalDialog("Message", footer = NULL))`), do the work, then `removeModal()`
- Use {shinycssloaders} to show a nice loading spinner - https://github.com/daattali/shinycssloaders
#### Empty State: don't bother
So in some cases, you might be tempted to try to add handling for a default case (e.g. on app load) where things haven't been selected and therefore you don't want to run a model, display a plot, and so on.
Now, there are some approaches you could theoretically take here, which pair especially well with a modular approach (because you can keep bespoke logic close to where it's needed, without creating too much clutter):
- For example: special handling with 'managers': [https://appsilon.com/shiny-emptystate-for-shiny-apps/](https://appsilon.com/shiny-emptystate-for-shiny-apps/ "https://appsilon.com/shiny-emptystate-for-shiny-apps/")
- However, I cannot for the life of me get this to work.
- Resizing the window causes it to disappear when using any event more complex than clicking a button. This looks like it's specifically an issue with `plotOutput()`: plots seem to be particularly unstable, and will re-render at little provocation, even when you think you've locked things down with e.g. `isolate()`.
- This also doesn't play nicely with {shinycssloaders}, which I think might be a slightly greater improvement in user experience than this empty-state handling.
- You could also write minimal custom logic with `uiOutput` and `renderUI` - render a custom empty state component manually, and switch to the table/plot/etc. output when available.
- However, this also seems to break {shinycssloaders}: it would appear that whatever emits "recalculating" status only does that for the fixed `render...()` functions - you just don't get a spinner with `renderUI()`.
So unfortunately the bottom line seems to be - don't bother with empty state: pick and set a meaningful default / initial view for your users.
#### Intelligent initial conditions
Based on the above, I'm recommending you set a meaningful default state for the user to see. To do this, you'll want to do two things:
- Select the right initial condition, and make sure everything points to that appropriately;
- Ensure that any dynamic UI elements are basically manually set to the correct initial values - e.g. if you have hierarchical inputs (select a dataset > select a column) then "double up" on the code that gets the right hierarchy.
- This is to prevent the double-update that otherwise happens on launch: everything populates, including (invalid) plots and tables and other outputs, and *then* (with the correct hierarchies now populated) the "right" outputs render.
- Hadley Wickham suggests a different approach in [*Mastering Shiny*](https://mastering-shiny.org/action-dynamic.html#freezing-reactive-inputs), but that approach - simply freezing the inputs - just prevents the "flicker" of the "bad" outputs: you still actually get two updates, which means your app is slightly less responsive.
TL;DR - it's hard to beat good curation, and as the analyst or scientist, you probably *should* be doing that curation.
## Advanced Shiny
- On complex behaviour:
- https://unleash-shiny.rinterface.com/shiny-custom-handler.html seems like a good resource for advanced web dev in Shiny
- It highlights that the `renderUI` approach to empty-state handling will have notable performance impacts in complex apps
### Shiny Performance
- See also the non-Shiny performance notes elsewhere in the doc
- [Lessons Learned with shiny.benchmark: Improving the Performance of a Shiny Dashboard (appsilon.com)](https://appsilon.com/benchmark-lessons-improving-shiny-dashboard-performance/ "https://appsilon.com/benchmark-lessons-improving-shiny-dashboard-performance/")
## In the wild
- Parliamentary Budget Office - https://www.pbo.gov.au/publications-and-data/data-and-tools/SMART
# Data and storage formats
https://ddotta.github.io/parquetize/index.html
> R package that allows to convert databases of different formats (csv, SAS, SPSS, Stata, rds, sqlite, JSON, ndJSON) to [parquet](https://parquet.apache.org/) format in a same function.
# Working with surveys
**Processing / Analysis**
- The `{survey}` package
- https://zacharylhertz.github.io/posts/2021/06/survey-package
- That one DataCamp course
**Potentially running surveys**
- Keep an eye on this one - building surveys with a markdown language: [jhelvy.com: {surveydown}: An open source, markdown-based survey framework (that doesn't exist yet)](https://www.jhelvy.com/posts/2023-04-06-markdown-surveys/ "https://www.jhelvy.com/posts/2023-04-06-markdown-surveys/")
- Last checked in late 2022 - watch this space.
# Performance
## Benchmarking
- {tictoc} to `tic()` and then `toc()` instead of saving a `Sys.time()` call at the top and then subtracting another one at the bottom
## Generally
- [collapse and the fastverse: Reflections on the Past, Present and Future - With Examples from Geospatial Data Science - R, Econometrics, High Performance (sebkrantz.github.io)](https://sebkrantz.github.io/Rblog/2023/04/12/collapse-and-the-fastverse-reflecting-the-past-present-and-future/ "https://sebkrantz.github.io/Rblog/2023/04/12/collapse-and-the-fastverse-reflecting-the-past-present-and-future/")
- [Writing performant code with tidy tools (tidyverse.org)](https://www.tidyverse.org/blog/2023/04/performant-packages/ "https://www.tidyverse.org/blog/2023/04/performant-packages/")
## Parallel processing
- `{future}`
- [GitHub - wlandau/crew: A distributed worker launcher](https://github.com/wlandau/crew "https://github.com/wlandau/crew") (good list of links to similar packages)
- [Enhancing the parallel Package • parallelly (futureverse.org)](https://parallelly.futureverse.org/ "https://parallelly.futureverse.org/")
# Scientific computing?
https://github.com/r-quantities/units
> Support for measurement units in R vectors, matrices and arrays: automatic propagation, conversion, derivation and simplification of units; raising errors in case of unit incompatibility. Compatible with the POSIXct, Date and difftime classes. Uses the UNIDATA udunits library and unit database for unit compatibility checking and conversion.
# Tables and Spreadsheets
I'm putting all the tables in one place together rather than distributing them throughout other sections - even though some are, for instance, really intended for Shiny, I think generally the search pattern would be "I am using X, and I need to pick a table solution", and sometimes the use-case will be one of the *other* options.
Spreadsheets are a little more complicated: really they're a special case of a table, but I think some of the out-of-the-box-CRUD solutions for Shiny both (a) should go here, and (b) deserve special recognition!
**Static**
- `{kableExtra}`
- `{gt}` in 0.10.0 has really come a long way - very interesting possibilities: https://github.com/rstudio/gt/releases/tag/v0.10.0
**Javascript**
- ~~Probably [DataTables](https://rstudio.github.io/DT/), the JS library~~
- Reactable!!! https://glin.github.io/reactable/articles/examples.html
**More advanced**
- OMG: out-of-the-box CRUD for Shiny - [https://github.com/openanalytics/editbl](https://github.com/openanalytics/editbl "https://github.com/openanalytics/editbl")
**Other**
- Like outputting to spreadsheets...
- `{writexl}` is extremely functional, with no dependencies (but limited functionality, accordingly)
- `{openxlsx}` I haven't actually used much; more versatile, and (unlike the older `{xlsx}` package) has no Java dependency.
- [Steve’s Data Tips and Tricks - Styling Tables for Excel with {styledTables} (spsanderson.com)](https://www.spsanderson.com/steveondata/posts/rtip-2023-04-11/index.html "https://www.spsanderson.com/steveondata/posts/rtip-2023-04-11/index.html")
**RStudio plugins...**
- {DataEditR} as an RStudio plugin for editing data
- Hmm, probably {editData} is close to an Excel-in-RStudio solution
# Working with databases
## Connections
https://rstudio.github.io/pool/
> The goal of the **pool** package is to abstract away the challenges of database connection management, which is particularly relevant in interactive contexts like Shiny apps that connect to a database.
>
> Instead of creating and closing connections yourself, you create a “pool” of connections, and the pool package manages them for you. You never have to create or close connections directly: The pool knows when it should grow, shrink or keep steady. You only need to close the pool when you’re done. The pool works seamlessly with DBI and dplyr, so in most cases using the pool package is as simple as replacing `DBI::dbConnect()` with `dbPool()` and adding a call to `poolClose()`.
- The `{keyring}` package
# Better workflows
## {targets}
- What more is there to say?
- Actually a lot - that's covered over at [[Data Pipelines]] because it has applications beyond just R.
### Design patterns
- Need to save an object - e.g. commit it to a repository because it can't be fully reconstructed (say, it comes from a client database)? **Don't** depend on `meta/meta` and `objects/...` - just create a target that loads it from e.g. `my-project/raw-data`; maybe throw in a `tar_hook` to calculate freshness manually. The `meta` approach runs into headaches when things change - e.g. if for some reason the hashes are unstable between people (this happened a bunch at Nous - Peter Ellis and Imalka Rangala).
### Known gaps
- [Saving ggplot2 objects](https://ropensci.org/blog/2022/12/06/save-ggplot2-targets/)
- Dealing with database connections: still pretty much an open question
- `{keyring}`?
- Some other sort of external credential store, e.g. cloud-provided?
- Instability in `meta` hashes when there shouldn't be
- See above 'design patterns' re: difficulties at Nous
- Generators / iterators / comprehension: [https://jcarroll.com.au/2023/08/18/taking-from-infinite-sequences/](https://jcarroll.com.au/2023/08/18/taking-from-infinite-sequences/ "https://jcarroll.com.au/2023/08/18/taking-from-infinite-sequences/")
- Was having this problem using this in some attempted parallelisation in targets: https://stackoverflow.com/questions/70230595/r-iterator-example-crashes
- I think this is mostly resolved by using the native parallelisation - using `tar_map` type patterns with the (new as of 2023ish) `{crew}` approach.
## Alternatives / Supplements
https://memoise.r-lib.org/
> The memoise package makes it easy to memoise R functions. **Memoisation** ([https://en.wikipedia.org/wiki/Memoization](https://en.wikipedia.org/wiki/Memoization)) caches function calls so that if a previously seen set of inputs is seen, it can return the previously computed output.
# Modelling
- `{tidymodels}`
- `{recipes}`
- `{yardstick}`
https://hendersontrent.github.io/correctR/ (by Trent @ Nous!)
> Often in machine learning, we want to compare the performance of different models to determine if one statistically outperforms another. However, the methods used (e.g., data resampling, k-fold cross-validation) to obtain these performance metrics (e.g., classification accuracy) violate the assumptions of traditional statistical tests such as a t-test. The purpose of these methods is to either aid generalisability of findings (i.e., through quantification of error as they produce multiple values for each model instead of just one) or to optimise model hyperparameters. This makes them invaluable, but unusable with traditional tests, as Dietterich (1998) found that the standard t-test underestimates the variance, therefore driving a high Type I error. correctR is a lightweight package that implements a small number of corrected test statistics for cases when samples are not independent (and therefore are correlated), such as in the case of resampling, k-fold cross-validation, and repeated k-fold cross-validation. These corrections were all originally proposed by Nadeau and Bengio (2003). Currently, only cases where two models are to be compared are supported.
# Testing, Asserting, and Error Handling
## Debugging
### Messaging
Progress monitoring:
- `{progressr}`
Outputting messages, warnings, errors; logging:
- `{logger}` definitely my favourite so far, though still some headaches.
## Actual testing
- Thinking about how to go about this:
- https://www.reddit.com/r/dataengineering/comments/i0ibhz/how_do_you_write_unit_tests_for_a_data/
- `{assertthat}`
- `{testthat}`
- Some very cool new features in 3.2.0: https://www.tidyverse.org/blog/2023/10/testthat-3-2-0/
- Facts about data, mid-pipe-chain: [`{assertr}`](https://docs.ropensci.org/assertr/)
- [GitHub - DavZim/dataverifyr: A Lightweight, Flexible, and Fast Data Validation Package that Can Handle All Sizes of Data](https://github.com/DavZim/dataverifyr "https://github.com/DavZim/dataverifyr")
- VERY cool: shinytest - [Getting started with shinytest • shinytest (rstudio.github.io)](https://rstudio.github.io/shinytest/articles/shinytest.html "https://rstudio.github.io/shinytest/articles/shinytest.html")
# Package/dependency management
- {capsule} vs {renv} - https://milesmcbain.micro.blog/2022/06/04/i-really-should.html
- Nice wrapper for downloading specific versions of packages based on date - not sure how relevant this is if using capsule or packrat or renv: [https://github.com/r-suzuki/dateback](https://github.com/r-suzuki/dateback "https://github.com/r-suzuki/dateback")
- [James Goldie - Dev containers with R and Quarto](https://jamesgoldie.dev/writing/dev-containers-in-r/ "https://jamesgoldie.dev/writing/dev-containers-in-r/")
- {import} for Python-style minimal imports (and local non-package code to boot!)
- Potentially convenient for more selectively loading things from packages - just a bit nicer than `x <- package::x`
- *Very* useful for unlocking much better ability to reuse raw R scripts: enables capability refactoring *before getting to the level of package development*.
- E.g.: a more monorepo approach?
- E.g.: better ability to bundle proprietary code with client projects (don't need to ship the entire internal package)?
Here's an idea. Rather than trying to:
- Maintain a proper package of utilities
- *High maintenance - large overhead*
- Leave different versions strewn about in code without a central / reference version
- *Low reusability*
- Keep things in a Git repo
- *If kept in a monorepo: would then have to manage merging changes up & downstream*
- *Else: similar to this approach*
What if we just kept stuff with our notes and merged changes as we came back to them?
- Bit of a mix of the above approaches
- Good balance of portable and centralised?
- *Could* probably use the markdown file as-is: just drop it into an R project and read in the chunks
# Development environments
- Look, RStudio is just definitely the best default.
- Oh interesting - for R on the phone, why not WASM/{webr}? [[#webr]]
# Graphics devices
- Learning about graphics devices in R...
- {ragg} seems like the way to go
- It looks like this gets used by default by officer and knitr...? Though I can't quite tell. Their CRAN pages list ragg as a reverse import or whatever, but googling 'r knitr default graphics device' isn't super clear.
- Anyway, this was inspired by nousstyle expecting fonts to be installed with extrafont - in a way that was no longer happening. Googling *that* led me down a rabbit hole: apparently font handling has improved *in the last couple of years* and now {systemfonts} is the way to go, but it looks like *this* only gets picked up properly by {ragg}...
- Ah: and specifically, {rvg} *is a graphics device* for MS Office SVGs. So when we call rvg::whatever when using officer, we're manually specifying a graphics device that seems to play nicely with Windows fonts (though it seems to be built on top of grDevices... go figure)
# webr
- This seems like a big and promising enough technology to warrant its own section...
- WASM as dev environment: [https://gws.quarto.pub/introduction-to-webr-2023/#/webr-repl-app](https://gws.quarto.pub/introduction-to-webr-2023/#/webr-repl-app "https://gws.quarto.pub/introduction-to-webr-2023/#/webr-repl-app")
- "experiments" with webR (look into): [🧪 🕸️ hrbrmstr's WebR Experiments Index (rud.is)](https://rud.is/webr-experiments/ "https://rud.is/webr-experiments/")
- https://jamesgoldie.dev/writing/your-new-r-package-webr/
- https://www.tidyverse.org/blog/2023/08/webr-0-2-0/
- {Shinylive}
- rostrum.blog on both {shinylive} and "Pseudo-apps in the browser"
- https://www.rostrum.blog/posts/2024-01-20-webr-remote/
- https://www.rostrum.blog/posts/2023-10-08-govspeakify-tables/
# Unsorted
gistr::gist_create() - seems very useful!
https://github.com/teunbrand/legendry - nifty macro-like package for guides. From the author of {ggh4x}.
- https://dl.acm.org/doi/10.1145/3340670.3342426
- https://djsir.vic.gov.au/what-we-do/employment-and-small-business/victorian-labour-force - example of Shiny deployed in the wild
- https://analysisfunction.civilservice.gov.uk/policy-store/reproducible-analytical-pipelines-strategy/
- Hmmmm - what's going on with the {vvshiny} package?
- This might be nice - especially just to quickly grab a first draft client colour palette: [https://cran.r-project.org/web/packages/colouR/vignettes/colouR.html](https://cran.r-project.org/web/packages/colouR/vignettes/colouR.html "https://cran.r-project.org/web/packages/colouR/vignettes/colouR.html")
- Write R extensions in Rust???? [https://github.com/dbdahl/cargo-framework](https://github.com/dbdahl/cargo-framework "https://github.com/dbdahl/cargo-framework")
- ML / other purposes: tokenization - [https://github.com/mlverse/tok](https://github.com/mlverse/tok "https://github.com/mlverse/tok")
- _Outputting to TeX:_ [https://github.com/daqana/tikzDevice](https://github.com/daqana/tikzDevice "https://github.com/daqana/tikzDevice")
- Forward on to Python/R questions (@BL): [https://cran.r-project.org/web/packages/reticulate/vignettes/arrays.html](https://cran.r-project.org/web/packages/reticulate/vignettes/arrays.html "https://cran.r-project.org/web/packages/reticulate/vignettes/arrays.html")
- _Very_ cool: use a static parquet file hosted somewhere as a database - [https://r.iresmi.net/posts/2023/fast_remote_parquet/index.html](https://r.iresmi.net/posts/2023/fast_remote_parquet/index.html "https://r.iresmi.net/posts/2023/fast_remote_parquet/index.html")
- Random forests, linear trees, gradient boosting: [GitHub - forestry-labs/Rforestry](https://github.com/forestry-labs/Rforestry "https://github.com/forestry-labs/rforestry")
- Nice package usage: [Quantum Jitter - Usedthese](https://www.quantumjitter.com/blog/usedthese/ "https://www.quantumjitter.com/blog/usedthese/")
- Other sources:
- [Quantum Jitter - Favourite Things](https://www.quantumjitter.com/project/box/ "https://www.quantumjitter.com/project/box/")
- To mine for interesting tools - https://r4stats.com/2023/06/07/update-to-data-science-software-popularity/
- The `easystats` ecosystem? e.g. the {see} package - [GitHub - easystats/see: Visualisation toolbox for beautiful and publication-ready figures](https://github.com/easystats/see "https://github.com/easystats/see")
- chatgpt (BYO API key) in the console - with some editor replacement features...? [GitHub - jcrodriguez1989/chatgpt: Interface to ChatGPT from R](https://github.com/jcrodriguez1989/chatgpt "https://github.com/jcrodriguez1989/chatgpt")
- Lightweight querying/uploading/downloading - [Introducing octopus: An R Package for Databases | by Marcus Codrescu | Mar, 2023 | Dev Genius](https://blog.devgenius.io/introducing-octopus-a-database-management-tool-built-with-r-efba560288c8 "https://blog.devgenius.io/introducing-octopus-a-database-management-tool-built-with-r-efba560288c8")
- https://luisdva.github.io/rstats/package-comments/
- Should we get in on the R Consortium? Seems like there's room: [Adoption of R by Actuaries Community in Melbourne - R Consortium (r-consortium.org)](https://www.r-consortium.org/blog/2023/04/06/adoption-of-r-by-actuaries-community-in-melbourne "https://www.r-consortium.org/blog/2023/04/06/adoption-of-r-by-actuaries-community-in-melbourne")
- Set up learning by automating classroom / agenda / etc. setup -> [Designing automated workflows, a chat with Dr. Sean Kross (openscapes.org)](https://www.openscapes.org/blog/2023/04/06/kyber/ "https://www.openscapes.org/blog/2023/04/06/kyber/") (maybe applications for Analytics Ready...?)
- [Do You Have to get() Objects? - Yihui Xie | 谢益辉](https://yihui.org/en/2023/04/get-objects/ "https://yihui.org/en/2023/04/get-objects/")
- [GitHub - NikKrieger/withdots: Put `...` in a Function's `args`](https://github.com/NikKrieger/withdots "https://github.com/NikKrieger/withdots")
- [GitHub - NSAPH-Software/CRE: The Causal Rule Ensemble Method](https://github.com/NSAPH-Software/CRE "https://github.com/NSAPH-Software/CRE")
- [GitHub - statisfactions/simpr: Tidyverse-friendly simulations and power analysis](https://github.com/statisfactions/simpr/ "https://github.com/statisfactions/simpr/")
- https://github.com/hunzikp/MapColoring
- BBC's cookbook for how they do viz/journalism in R: https://bbc.github.io/rcookbook/
- R for the Raspberry Pi: https://r4pi.org/
- Checking for names / object existence:
- `utils::hasName`
- `rlang::has_name`
- `testthat::expect_named`
- {ShinyUiEditor}
- Using QR codes in R: {opencv}
- More attempts at monads / decorators / pipeline logging:
- {chronicler}
- https://collateral.jamesgoldie.dev/
- [https://posit.co/blog/posit-cheatsheets-now-in-html/](https://posit.co/blog/posit-cheatsheets-now-in-html/ "https://posit.co/blog/posit-cheatsheets-now-in-html/")
- [https://www.sumsar.net/blog/git-repo-to-stash-throwaway-code/](https://www.sumsar.net/blog/git-repo-to-stash-throwaway-code/ "https://www.sumsar.net/blog/git-repo-to-stash-throwaway-code/")
- [https://www.jumpingrivers.com/blog/shiny-app-start-up-google-lighthouse-part-2/](https://www.jumpingrivers.com/blog/shiny-app-start-up-google-lighthouse-part-2/ "https://www.jumpingrivers.com/blog/shiny-app-start-up-google-lighthouse-part-2/")
- [https://www.youtube.com/watch?v=8_k-iPwcleU](https://www.youtube.com/watch?v=8_k-iPwcleU "https://www.youtube.com/watch?v=8_k-iPwcleU")
- [https://github.blog/2022-06-30-write-better-commits-build-better-projects/](https://github.blog/2022-06-30-write-better-commits-build-better-projects/ "https://github.blog/2022-06-30-write-better-commits-build-better-projects/")
- [https://github.com/hafen/geofacet](https://github.com/hafen/geofacet "https://github.com/hafen/geofacet")
- [https://www.spsanderson.com/steveondata/posts/2023-11-29/index.html](https://www.spsanderson.com/steveondata/posts/2023-11-29/index.html "https://www.spsanderson.com/steveondata/posts/2023-11-29/index.html")
- https://benharrap.com/post/2024-01-19-rstudio-noodle-theme/
- ohshitgit.com and {saperlipopette}
- Jumping Rivers blog post on {arrow}: reading/writing feather and parquet (particularly relevant for the partitioning bit, which I should master. Should write something about {arrow} too.)
- Miles McBain: {fnmate} and {tflow} - the latter is interesting as we've converged on some design principles.
- Figure out what r-universe actually... is?
- {dfmirroR} package to turn real data into fake data, and automatically generate the code for that fake data a la a reprex
- {bscui}: make really really cool SVG widgets - almost like their own little {leaflet} widgets - that work in Quarto, Shiny...
- r-sassy.org: a universe for adapting SAS work to R. While I am disgusted, I also respect the effort to reduce transaction costs and accelerate the transition.
- What madness is this? - {excel.link}: read/write directly from/to a live Excel sheet??
- {shiny.semantic} - yet more alternatives for building Shiny UIs
- {fabricatr} - for fake data
- PCA explanation, visualisation using AoE2: https://luisdva.github.io/rstats/aoe-PCA/
- Huh - I was very close with {OzMap}! First commit for that was Dec 2019, while first commit for {absmapsdata} was Feb 2019. 🥂 wfmackey!
- Python for R users: https://emilyriederer.netlify.app/post/py-rgo/
# To catch up on some time
## April 2025
- Make a note that R 4.5.0 (newly released) [introduces](https://www.r-bloggers.com/2025/04/whats-new-in-r-4-5-0/) `use()`, a tight alternative to {box} and {import}. Need to investigate that more and upgrade to use it!