Beware the ticks of R Markdown

This will be a very short post. It addresses a problem I had and I have seen others ask about on various forums (but with no answer).

The problem

The icons have disappeared from the top of some of my R chunks in my R Markdown document.

Notice how there is no settings icon, or the little green run icon in the second R chunk. If you have eagle eyes you will see why.

The solution

If you look very closely in Figure 1, you will see there is an extra set of back ticks at the end of the chunk. Remove these and you fix the problem.

This might seem blindingly obvious to some, but it is really easy to miss in a large R Markdown document. It also has follow-on consequences: it makes it harder to run lines of code in a chunk, because you have to select them before using Cmd/Ctrl-Enter.
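For the searchers, here is a hypothetical sketch of the situation (the chunk names are made up). A stray fence after a chunk flips the parser state, so everything that follows is no longer recognised as proper chunks:

````markdown
```{r first-chunk}
summary(iris)
```
```

```{r second-chunk}
plot(iris)
```
````

The extra ``` after the first chunk is the culprit; delete it and the icons come back.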

But wait! There’s more!

This just in, highlighted by Twitter pal Michael MacAskill (@Sakkaden). A space between the backticks and the {r} can cause the same problem.

Thanks Michael!

Producing a biplot with labels

If you are hoping for blinding insight in this post, then I think you better stop reading now. A friend asked me how to show him how to display the observation labels (or class labels) in a PCA biplot. Given how long prcomp has been around, this is hardly new information. It might now even be a feature of plot.prcomp. However, for posterity, and for the searchers, I will give some simple solutions. I will, for the sake of pedagogy, even stop effing and blinding about how the Fisher’s Iris data set deserves to be left to crumble into a distant memory of when we used to do computing with a Jacquard Loom.

I will make use of tidyverse functions in this example, because kids these days. I would not worry about it too much. It just makes some of the data manipulation slightly more transparent. Firstly I will load the data. The Iris data set is an internal R data set so the data command will do it.

data(iris)
names(iris)

## [1] "Sepal.Length" "Sepal.Width"  "Petal.Length" "Petal.Width"
## [5] "Species"

Now I will pull the labels out into a separate vector.

library(tidyverse)

iris.labels = iris %>%
  pull(Species) %>%
  as.character()
iris.data = iris %>%
  select(-Species)


Performing the PCA itself is pretty straightforward. I have chosen to scale the variables in this example for no particular reason. The post is not about PCA so it does not matter too much.

pc = prcomp(iris.data, scale. = TRUE)
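As a quick sanity check on what scale. = TRUE does (this check is mine, not from the original post): with scaled variables the PC variances are in correlation units and sum to the number of variables, whereas without scaling they sum to the total variance of the raw columns.

```r
# PCA on the four numeric iris columns, with and without scaling
pc.scaled = prcomp(iris[, 1:4], scale. = TRUE)
pc.raw    = prcomp(iris[, 1:4])

# With scaling, the PC variances (squared sdev) sum to the
# number of variables (4 here)
sum(pc.scaled$sdev^2)

# Without scaling, they sum to the total variance of the raw variables
sum(pc.raw$sdev^2)
```

For the iris columns the unscaled total is larger than 4, because Petal.Length dominates the variance.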


The scores—the projected values of the observations—are stored in a matrix called pc$x. First we will produce a plot with just the observation numbers.

plot(pc$x[,1:2], type = 'n')
text(pc$x[,1:2], labels = 1:nrow(iris.data))

It would be nice to put the species labels on instead of numbers. The species names are rather long though, so I am going to recode them

iris.labels = recode(iris.labels, 'setosa' = 's', 'versicolor' = 'v', 'virginica' = 'i')

And now we can plot the labels if we want:

plot(pc$x[,1:2], type = 'n')
text(pc$x[,1:2], labels = iris.labels)

Hey you promised us ggplot2 ya bum! Okay, okay. To do this we need to coerce the scores into a data.frame. The nice thing about doing this is that the principal components are conveniently labelled PC1, PC2, etc. This makes the mapping fairly easy.

library(ggplot2)
pcs = data.frame(pc$x)
p = pcs %>%
  ggplot(aes(x = PC1,
             y = PC2,
             label = iris.labels)) +
  geom_text()
p


Et voila!

Adding multiple regression lines to a faceted ggplot2 plot

Those of you who know me well know I am not really a ggplot2 type of guy. Not because I do not think that it is good work—I would be an idiot to claim that probably the most downloaded package in the history of CRAN is not good—but because I predate both Hadley and ggplot2 (yes I admit it, I am a dinosaur now) I can do most of what I want in base R. However, there are times in exploratory data analysis where going the ggplot2 route makes more sense. One situation where that happens for me is when I am fitting linear models with a mixture of continuous explanatory variables and factors. Some people have an old-fashioned name for this situation—but really repeating that name would be a get-burned-at-a-stake type of offence. It is useful when fitting linear models of this type to be able to see the behaviour of the response with respect to a continuous predictor for each combination of the levels of the factors. ggplot2 makes it fairly easy to produce this type of plot through its faceting mechanism. It also makes it really easy to add a fitted line with a pretty confidence interval to each facet. I should say at this point that this is not restricted to linear models, and in fact works for generalised linear models as well, and for semi-parametric models like smoothers. It is cool stuff.
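As a concrete sketch of that faceting-plus-fitted-line idea (using the built-in mtcars data rather than anything from this post):

```r
library(ggplot2)

# mpg against weight, one facet per transmission type, with a
# least-squares fit and its pointwise confidence band in each facet
p = ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  geom_smooth(method = "lm") +
  facet_wrap(~factor(am, labels = c("automatic", "manual")))
p
```

Note that geom_smooth fits the model separately within each facet, which matters later in this post.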

I want a pony, and I want it NOW Daddy

This is, as I have said, made easy to do in ggplot2 and a half hour of Googling will get you to the point where you can do it with your data. However, I found myself with the following problem. I had a situation where there was a suggestion that an interaction might be significant and so I wanted to explore visually how the fitted models differed with and without interaction. You often find yourself in this situation with tests suggesting the interactions are significant only to find that it is driven by one combination of the factors, or even worse, by a single outlier. Making things explicit, in this problem I was interested in the difference between this model:

$$Y \sim X * F_1 * F_2$$

and this model

$$Y \sim X * F_1 + F_2$$

These models are expressed in standard Wilkinson and Rogers notation, where $$Y$$ and $$X$$ are continuous, and $$F_1$$ and $$F_2$$ are factors. However, the same issues arise if we are interested in

$$Y \sim X * F$$

and this model
$$Y \sim X + F$$

The only difference is that the different levels of the factor $$F$$ can be represented on a single row—if there are not too many levels—or in a wrapped facet plot if there are quite a few levels.

We need some data

The data in this post come from Hand, D.J., Daly, F., Lunn, A.D., McConway, K.J. & Ostrowski, E. (1994). A Handbook of Small Data Sets Boca Raton, Florida: Chapman and Hall/CRC. The data try to relate the mortality in UK towns and cities—measured as deaths per 100,000 head of population—to the hardness of the water—measured by calcium concentration (in parts-per-million = ppm). The original authors also had the theory that this relationship changed when the town or city was in “The North.” I have always found the English concept of “Up North” rather bizarre. As you leave the outskirts of London you start seeing road signs telling you how far it is to “The North.” In this data set, “The North” is defined as being north of the English city of Derby. The data is available here: water.csv.

water.df = read.csv("water.csv")


It is worth having a quick look at the summary data

library(skimr)
skim(water.df)

## Skim summary statistics
##  n obs: 61
##  n variables: 4
##
## Variable type: factor
##  variable missing complete  n n_unique                     top_counts
##      town       0       61 61       61 Bat: 1, Bir: 1, Bir: 1, Bla: 1
##  ordered
##    FALSE
##
## Variable type: integer
##   variable missing complete  n    mean     sd   p0  p25  p50  p75 p100
##    calcium       0       61 61   47.18  38.09    5   14   39   75  138
##  mortality       0       61 61 1524.15 187.67 1096 1379 1555 1668 1987
##      north       0       61 61    0.57   0.5     0    0    1    1    1


We can see from this that our factor north is not being treated as such. I will recode it, and since we are using ggplot2 we might as well go into the tidyverse too. I think I am going to violate a Hadley principle by overwriting the existing data, but I can live with this.

library(tidyverse)
water.df = water.df %>%
  mutate(north = factor(ifelse(north == 1, "north", "south")))


The initial plot

I am going to plot mortality as a function of calcium and I am going to facet (if that is a verb) by our factor north. There is a good argument to make that because both variables are rates (deaths per 100,000 and parts per million) then we should work on a log-log scale. However, given this is not a post on regression analysis, I will not make this transformation (Spoiler: there is no evidence that the factor makes any difference on this scale which kind of makes it useless to show what I want to show ☺)

library(ggplot2)
p = water.df %>%
  ggplot(aes(x = calcium, y = mortality)) +
  geom_point()

p = p + facet_wrap(~north)
p


It looks like there is evidence of a shift, but is there evidence of a difference in slopes?

water.fit = lm(mortality ~ calcium * north, data = water.df)
summary(water.fit)


##
## Call:
## lm(formula = mortality ~ calcium * north, data = water.df)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -221.27  -75.45    8.52   87.48  310.14
##
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)
## (Intercept)        1692.3128    32.2005  52.555   <2e-16 ***
## calcium              -1.9313     0.8081  -2.390   0.0202 *
## northsouth         -169.4978    58.5916  -2.893   0.0054 **
## calcium:northsouth   -0.1614     1.0127  -0.159   0.8740
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 123.2 on 57 degrees of freedom
## Multiple R-squared:  0.5909, Adjusted R-squared:  0.5694
## F-statistic: 27.44 on 3 and 57 DF,  p-value: 4.102e-11

Ignoring all model checking (it ain’t so bad STATS 201x kids), there does not appear to be much support for the different slopes model. However there is for the different intercepts model. I would like to explore this visually.
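The same comparison can also be made formally with an F test via anova() on the two nested models. A self-contained sketch using the built-in mtcars data (not the water data; am plays the role of the factor here):

```r
# Does the slope of mpg on weight differ between automatic and manual cars?
fit.main = lm(mpg ~ wt + factor(am), data = mtcars)  # different intercepts only
fit.int  = lm(mpg ~ wt * factor(am), data = mtcars)  # different slopes too

# The second row's F test assesses the extra interaction term
anova(fit.main, fit.int)
```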

So about those lines…

What I would like to do is to place the fitted lines from each model on each facet of the plot. In addition, I would like the confidence interval for each line to be added to each facet. There might be a way to do this with geom_smooth, and I would be happy to hear about it. My solution involves three steps:

1. Fitting each of the models, and using predict to get the confidence intervals
2. Adding the fitted line from each model to each facet
3. Adding the confidence intervals from each model to each facet

Step 1—getting the data for the confidence intervals

This step involves, as previously mentioned, using the predict function (in fact predict.lm). This should be fairly straightforward. The key here is to add the interval information, and the fitted values to either a new data frame or our existing data frame. I will opt for the latter.

water.fit1 = water.fit ## interaction model
water.fit2 = lm(mortality ~ calcium + north, data = water.df)

water.pred1 = predict(water.fit1, water.df, interval = "confidence")
water.pred2 = predict(water.fit2, water.df, interval = "confidence")

water.df = water.df %>%
  mutate(fit1 = water.pred1[,"fit"],
         lwr1 = water.pred1[,"lwr"],
         upr1 = water.pred1[,"upr"],
         fit2 = water.pred2[,"fit"],
         lwr2 = water.pred2[,"lwr"],
         upr2 = water.pred2[,"upr"])


Step 2—Adding the fitted lines

Remember that our plot is stored in the variable p. We will add the fitted lines using the geom_line function. We could have done this with geom_abline and just the coefficients; however, this would have made the method less flexible, because we could not accommodate a simple quadratic model, for example.

p = p + geom_line(data = water.df,
                  mapping = aes(x = calcium, y = fit1),
                  col = "blue", size = 1.2)
p = p + geom_line(data = water.df,
                  mapping = aes(x = calcium, y = fit2),
                  col = "red", size = 1.2)
p


We can see already the lack of support for the different slopes model, however, let’s add the confidence intervals.

Step 3—Adding the confidence intervals

We add the confidence intervals by using the geom_ribbon function. The aesthetic for geom_ribbon requires two sets of y-values, ymin and ymax. These, clearly, are the values we calculated for each of the confidence intervals. We set the colour of the bands by setting fill. Setting col controls the border colour of the ribbon. We make the bands transparent (so they do not obscure our fitted lines) by setting an alpha value closer to zero than to one (0.2 seems to be a good choice).

p = p + geom_ribbon(data = water.df,
                    mapping = aes(x = calcium, ymin = lwr1, ymax = upr1),
                    fill = "blue", alpha = 0.2)
p = p + geom_ribbon(data = water.df,
                    mapping = aes(x = calcium, ymin = lwr2, ymax = upr2),
                    fill = "red", alpha = 0.2)
p


As expected the intervals completely overlap, thus supporting our decision to choose the simpler model.

Well that sucked

How about if there really was a slope effect? I have artificially added one, and redrawn the plot just so we can see what happens when there really is a difference.

## just so we all get the same result
set.seed(123)

## get the original coefficient
b = coef(water.fit1)

## and alter the effect for the interaction
b[4] = -5 # make a big difference to the slopes

## get the standard deviation of the residuals
sigma = summary(water.fit1)$sigma

## and use it to make a new set of responses; "north" is the reference
## level, so the northsouth coefficients b[3] and b[4] apply to the south
water.df = water.df %>%
  mutate(mortality = (b[1] + (north == 'south') * b[3]) +
           (b[2] + (north == 'south') * b[4]) * calcium +
           rnorm(61, 0, sigma))

## Now we repeat all the steps above
water.fit1 = lm(mortality ~ calcium * north, data = water.df) ## interaction model
water.fit2 = lm(mortality ~ calcium + north, data = water.df)

water.pred1 = predict(water.fit1, water.df, interval = "confidence")
water.pred2 = predict(water.fit2, water.df, interval = "confidence")

water.df = water.df %>%
  mutate(fit1 = water.pred1[,"fit"],
         lwr1 = water.pred1[,"lwr"],
         upr1 = water.pred1[,"upr"],
         fit2 = water.pred2[,"fit"],
         lwr2 = water.pred2[,"lwr"],
         upr2 = water.pred2[,"upr"])

p = water.df %>%
  ggplot(aes(x = calcium, y = mortality)) +
  geom_point()
p = p + facet_wrap(~north)
p = p + geom_line(data = water.df,
                  mapping = aes(x = calcium, y = fit1),
                  col = "blue", size = 1.2)
p = p + geom_line(data = water.df,
                  mapping = aes(x = calcium, y = fit2),
                  col = "red", size = 1.2)
p = p + geom_ribbon(data = water.df,
                    mapping = aes(x = calcium, ymin = lwr1, ymax = upr1),
                    fill = "blue", alpha = 0.2)
p = p + geom_ribbon(data = water.df,
                    mapping = aes(x = calcium, ymin = lwr2, ymax = upr2),
                    fill = "red", alpha = 0.2)
p

One final note

I suspect, though I have not checked properly, that when geom_smooth is used the fitted lines on each facet are derived from the subset of data in that facet, and therefore the standard errors might be slightly larger (because less data is used to estimate $$\sigma$$). However, I am happy to be corrected (and if I am wrong I will remove this section :-)).

Conference Programmes in R

It has been a while folks! Lately—over the last month—I have been writing code that helps with the production of a timetable and abstract booklet for conferences. The latest version of that code can be found here, the associated bookdown project here, and the final result here.
Note that the programme may change a good number of times before next Sunday (December 10, 2017) when the conference starts.

The code above is an improved version of the package I wrote for the Biometrics By The Border conference, which worked solely with Google Sheets by using it as a poor man's database. I wouldn't advise this because it is an extreme bottleneck when you constantly need to refresh the database information.

The initial project was driven by necessity. I foolishly suggested we use EasyChair to collect information from speakers, not realising that the free version of EasyChair does not allow you to download the abstracts your speakers helpfully provided to you (and fair enough—they are trying to make some money from this service). Please note that if I am incorrect in this statement, I'd be more than happy to find out. A word of advice: if this is still the case, then get your participants to upload an R Markdown (or just plain text) file instead of using the abstract box on the webpage. You can download these files from EasyChair in a single zip file.

The second project was also driven by necessity, but the need in this case had more to do with the sheer complexity of organising several hundred talks over a four-day programme. Now that everything is in place, most changes requested by speakers or authors can be accommodated in a few minutes by simply moving the talk on the Google Sheet that controls the programme, calling several R functions, recompiling the book, and pushing it up to GitHub.

So how does it all work?

The current package does depend on some basic information being stored on Google Sheets. The sheets for my current conference (the Joint Conference of the New Zealand Statistical Association and the International Association of Statistical Computing (Asian Regional Section)) can be viewed here on Google Sheets. Not all the worksheets here are important.
The key sheets are: Monday, Tuesday, Wednesday, Thursday, All Submissions, Allocations, All_Authors, Monday_Chairs, Tuesday_Chairs, Wednesday_Chairs, and Thursday_Chairs.

The first four and the last four sheets (Monday–Thursday) are hand created. The colours are for our convenience and do not matter. The layout does, and some of it is hard-coded into the project. For example, although there are seven streams a day, only six of them belong directly to us, as the seventh belongs to a satellite conference. The code relies on every scheduled event (including meal breaks) having a time adjacent to it in the Time column. It also collects information about the rooms, as the order of these rooms does change on some days.

The All_Authors sheet comes from EasyChair. EasyChair provides the facility to download author email information as an Excel spreadsheet, which in turn can be uploaded to Google Sheets easily. This sheet is simply the sheet labelled All in the EasyChair file. The All Submissions sheet is a copy-and-paste from the EasyChair submissions page. I probably could webscrape this with a little effort.

The Allocations sheet mostly reflects the All Submissions sheet, but has user-added categorizations that allow us to group the talks into sensible sessions. It also uses formulae to format the titles of the talks so that each title word is capitalized (this is an imperfect process), and so that the name of the submitter (who we have assumed is the speaker) is appended to the title, allowing it to be pasted into the programme sheets.

How do we get data out of Google Sheets? Enter googlesheets

The code works in a number of distinct phases. The first is capturing all the relevant information from the web and placing it in a SQLite database. I used Jenny Bryan's googlesheets package to extract the data from my spreadsheets.
The process is quite straightforward, although I found that it worked a little more smoothly on my Mac than on my Windows box. This may have more to do with the fact that the R install was brand new, and so the httr package was not installed. The difference between it being installed and not installed is that when it is installed, authentication (with Google) happens entirely within the browser, whereas without it you are required to copy and paste an authentication code back into R. When you are doing this many times, the former is clearly more desirable.

Interaction with Google Sheets consists of first grabbing the workbook, and then asking for information from the individual sheets. (Google Sheets does not use the term workbook; it comes from Excel, but it encapsulates the idea that you have a collection of one or more worksheets in any one project, and that the container holding them all is called a workbook.) The request to see a workbook is what will prompt the authentication, e.g.

library(googlesheets)
mySheets = gs_ls()
mySheets

The call to gs_ls will prompt authentication. It is important to note that this authentication will last for a certain period of time, and should only require interaction with a web browser the very first time you do it, and never again. Subsequent requests will result in a message along the lines of Auto-refreshing stale OAuth token.

The call to gs_ls will return a list of available workbooks. A particular workbook may be retrieved by calling gs_title. For example, this function allows me to get to the conference workbook:

getProgrammeSheet = function(title = "NZSA-IASC-ARS 2017"){
  mySheets = gs_ls()
  ss = gs_title(title)
  return(ss)
}

I can use the object returned by this function in turn to access the worksheets I want to work with using gs_read. The functions in googlesheets are written to be compliant with the tidyverse, and in particular the pipe %>% operator.
I will be the first to admit this is not my preferred way of working, and I could have easily worked around it. However, in the interests of furthering my skills, I have tried to make the project tidyverse compliant, and nearly all the functions will work using the pipe operator, although they take a worksheet or a database as their first argument rather than a tibble.

Once I have a workbook object (really just an authenticated connection), I can then read each of the spreadsheets. This is done by a function called updateDB. The only thing worth commenting on in this function is that the worksheets for each of the days have headers which do not really resolve well into column names. They also have a fixed set of rows and columns that we wish to capture. To that end, the range is specified for each day, and the column headers are simply set to be the first seven capital letters of the alphabet (A–G). These sheets are stored as tibbles, which are then written to an internal SQLite database using the dbWriteTable function.

There are eight functions (createRoomsTbl, createAffilTbl, createTitleTbl, createAbstractTbl, createAuthorTbl, createAuthorSubTbl, createProgTbl, createChairTbl) which operate on the database/spreadsheet tables to produce the database tables we need for generating the conference timetable and the abstract booklet. These functions are rarely called by themselves; we tend to call a single omnibus function, rebuildDB. This function allows the user to refresh the information from the web if need be, and to recreate all of the tables in the database. The bottleneck in this function is retrieving the information from the internet, which may take around 20–30 seconds depending on the connection speed.

The database tables provide information for four functions: writeTT, writeProg, writeIndices, and writeSessionChairs. Each of these functions produces one or more R Markdown files in the directory used by the bookdown project.
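The SQLite side of this is only a few lines with the DBI and RSQLite packages. A minimal sketch (the table name and toy data here are made up, not the ones the package uses):

```r
library(DBI)

# Connect to an in-memory SQLite database (an on-disk file path works the same way)
con = dbConnect(RSQLite::SQLite(), ":memory:")

# A stand-in for one of the tibbles read from Google Sheets
submissions = data.frame(id = 1:3, title = c("Talk A", "Talk B", "Talk C"))

# Write it as a database table, replacing any existing copy
dbWriteTable(con, "submissions", submissions, overwrite = TRUE)

# Read it back to check
res = dbReadTable(con, "submissions")
res

dbDisconnect(con)
```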
Bookdown, ePub and gitBook

The final product is generated using bookdown. Bookdown, in explanation, sounds simple. Implementation is really improved by the help of a master. I found this blog post by Sean Kross very helpful, along with his minimal bookdown project from GitHub.

It would be misleading of me to suggest that the programme book was really produced using R Markdown. Small elements are in Markdown, but the vast majority of the formatting is achieved by writing code which writes HTML. This is especially true of the conference timetable, and of the hyperlinking of the abstracts to the timetable and other indices.

The four functions listed above write out six Markdown pages. These are the conference timetable, the session chairs table, four pages for each of the days of the conference, and two indices, one linking talks to authors, and one linking talk titles to submission numbers (which for the most part were issued by EasyChair). There is not a lot more discussion involved here. Sean's project sets things up in such a way that changes to the Markdown or the YAML files will automatically trigger a rebuild.

Things I struggled with

Our conference had six parallel streams. Whilst it is easy enough to make tables that will hold all this information, it is very difficult to decide how best to display it in a way that would suit everyone. The original HTML tables were squashed almost to the point where there was a character per line on a mobile phone screen. We overcame this slightly by fixing the widths of the div element that holds the tables and adding horizontal scrolling. Many people found this feature confusing, and it did not necessarily translate into ePub. We then added some JavaScript functionality by way of the tablesaw library. This allowed us to keep the time column persistent no matter which stream people were looking at, and it allowed better scrolling of streams that were offscreen.
However, this was still a step too far for the technologically challenged. In the end we resorted to printing out the timetable. I also used the excellent Calibre software to take the ePub as input and output it in other formats—most usefully Microsoft Word's .docx format. I know some of you are shuddering at this thought, but it did allow me to rotate the programme timetable and then create a PDF. This made the old fogeys immensely happy, and me immensely irritated, as I thought the gitBook version was quite useful.

Not forgetting the abstracts

Omitted from my workflow so far is mention of the abstracts. We had authors upload LaTeX (.tex) files to EasyChair, or text files. If you don't do this, and they use EasyChair's abstract box, then you have to find a way to scrape the data. There are downsides in doing so (and it even occurs in the user-submitted files) in that some unicode text seems to creep in. Needless to say, even after I used an R function to convert all the files to markdown, we still had to do a bunch of manual cleaning.

Anyway, I hope someone finds this work useful. I have no intention of running a conference again for at least four years, but I would appreciate it if anyone wants to build on my work.

How do I match that?

This is not a new post, but a repost after a failed WordPress upgrade.

One of the projects I am working on is the 3rd edition of Introduction to Bayesian Statistics, a textbook predominantly written by my former colleague Bill Bolstad. It will be out later this year if you are interested. One of the things which will make no difference to the reader, but will make a lot of difference to me, is the replacement of all the manually numbered references in the book for things like chapters, sections, tables and figures. The problem I am writing about today arose from programmatically trying to label the latter — tables and figures — in LaTeX.
LaTeX's reference system, as best I understand it, requires that you place a \label command after the caption. For example:

\begin{figure}
\centering
\includegraphics{myfig}
\caption{Here is a plot of something}
\label{fig:myfig}
\end{figure}

This piece of code will create a label fig:myfig which I can later use in a \ref{fig:myfig} command. This will in turn be replaced at compile time with a number dynamically formatted according to the chapter and the number of figures which precede this plot.

The challenge

The problem I was faced with is easy enough to describe. I needed to open each .tex file, find all of the figure and table environments, check to see if they contained a caption, and add a label if they did.

Regular expressions to the rescue

Fortunately I know a bit about regular expressions, or at least enough to know when to ask for help. To make things more complicated for myself, I did this in R. Why? Well, basically because a) I did not feel like dusting off my grossly unused Perl — I've been clean of Perl for 4 years now and I intend to stay that way — and b) I could not be bothered with Java's file handling routines — I want to be able to open files for reading with one command, not 4 or 8 or whatever the hell it is. Looking back I could have used C++, because the new C++11 standard finally has a regex library and the ability to have raw strings where everything does not have to be double escaped, i.e. I can write R"(\label)" to look for a line that has a \label on it, rather than "\\label" where I have to escape the backslash. And before anyone feels the urge to suggest a certain language, I remind you to "Just say no to Python!"

Finding the figure and table environments is easy enough. I simply look for the \begin{figure} and \begin{table} tags, as well as the environment closing tags \end{figure} and \end{table}. It is possible to do this all in one regular expression, but I wanted to capture the \begin and \end pairs.
I also wanted to deal with tables and figures separately. The reason for this is that it was possible to infer the figure labels from Bill's file naming convention for his figures. The tables, on the other hand, could just be labelled sequentially, i.e. starting at 1 and counting upwards, with a prefix reflecting the chapter number.

Lines = readLines("Chapter.tex")
begin = grep("\\\\begin\\{figure\\}", Lines)
end = grep("\\\\end\\{figure\\}", Lines)
n = length(begin)

if(n != length(end))
  stop("The number of begin and end pairs don't match")

## Now we can work on each figure environment in turn
for(k in 1:n){
  b = begin[k]
  e = end[k]
  block = paste0(Lines[b:e], collapse = "\n")

  if(!grepl("\\\\label", block)){ ## only add a label if there isn't one already
    ## everything I'm really talking about is going to happen here.
  }
}

So what I needed to be able to do was find the caption inside my block and then insert a label. Easy enough, right? I should be able to write a regular expression to do that. How about something like this:

pattern = "\\\\caption\\{([^}]+)\\}"

That will work most of the time, except, as you will find out, when the caption contains braces itself, and we have some examples that do have just that:

\caption{If \emph{A} is true then \emph{B} is true.'' Deduction is possible.}

My first regular expression would only match up to the end of \emph{A}, which does not help me. I need something that could, in theory, match an unlimited number of inner sets of braces.

Matching nested parentheses

Fortunately matching nested parentheses is a well-known problem, and Hadley Wickham tweeted me a Stack Overflow link that got me started. There is also a similar solution on page 330 of Jeffrey Friedl's very useful Mastering Regular Expressions book. The solution relies on a regular expression which employs recursion.
Set perl = TRUE to use PCRE (and recursion) in R

To make this work in R we have to make sure the PCRE library is used, and this is done by setting perl = TRUE in the call to gregexpr. This is my solution:

## insert the label after the caption
pat = "caption(\\{([^{}]+|(?1))*\\})"
m = gregexpr(pat, block, perl = TRUE)
capStart = attr(m[[1]], "capture.start", TRUE)[1]
capLength = attr(m[[1]], "capture.length", TRUE)[1]
strLabel = paste0("\\", "label{fig:", figNumber, "}\n")
newBlock = paste0(substr(block, 1, capStart + capLength),
                  strLabel,
                  substr(block, capStart + capLength + 1, nchar(block)))

The regular expression assigned to pat is where the work gets done. Reading the expression from left to right, it says:

1. match caption literally
2. open the first capture group
3. match { literally
4. open the second capture group
5. match one or more instances of anything that is not an open brace { or a close brace }, or
6. open the third capture group and recursively match the first sub-pattern (I will elaborate on this more in a bit)
7. close the second and third capture groups and ask R to match this pattern zero or more times
8. literally match the close brace }
9. close the first capture group

I would be the first to admit that I do not quite understand what ?1 does in this regexp. The initial solution used ?R. The effect of that was that I could match all sets of paired braces within block, but I could not specifically match the caption. As much as I understand it, ?1 seems to limit the recursion to the outer (first) capture group. I would be interested to know more.

The rest of the code breaks the string apart, inserts the correct label, and creates a new block with the label inserted. I replace the first line of the figure environment block with this new string, and keep a list of the remaining line numbers so that they can be omitted when the file is written back to disk.
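To see the difference between the naive pattern and the recursive one, here is a small self-contained check (the example caption is adapted from the one above):

```r
block = "\\caption{If \\emph{A} is true then \\emph{B} is true.} and more text"

# The naive pattern stops at the first closing brace...
naive = regmatches(block, regexpr("\\\\caption\\{([^}]+)\\}", block))
naive  # "\\caption{If \\emph{A}"

# ...while the recursive PCRE pattern matches the balanced braces
pat = "caption(\\{([^{}]+|(?1))*\\})"
full = regmatches(block, regexpr(pat, block, perl = TRUE))
full  # "caption{If \\emph{A} is true then \\emph{B} is true.}"
```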
An introduction to using Rcpp modules in an R package

Introduction

The aim of this post is to provide readers with a minimal example demonstrating the use of Rcpp modules within an R package. The code and all files for this example can be found at https://github.com/jmcurran/minModuleEx.

What are Rcpp Modules?

Rcpp modules allow R programmers to expose their C++ classes to R. By "expose" I mean the ability to instantiate a C++ object from within R, and to call methods which have been defined in the C++ class definition. I am sure there are many reasons why this is useful, but the main reason for me is that it provides a simple mechanism to create multiple instances of the same class. An example of where I have used this is my multicool package, which is used to generate permutations of multisets. One can certainly imagine a situation where you might need to generate the permutations of more than two multisets at the same time. multicool allows you to do this by instantiating multiple multicool objects.

The Files

I will make the assumption that you, the reader, know how to create a package which uses Rcpp. If you do not know how to do this, then I suggest you look at the section entitled "Creating a New Package" on the RStudio support site. Important: Although it is mentioned in the text, the image displayed on that page does not show that you should change the Type: drop down box to Package w/ Rcpp. This makes sure that a bunch of fields are set for you in the DESCRIPTION file that ensure Rcpp is linked to and imported.

There are five files in this minimal example. They are:

• DESCRIPTION
• NAMESPACE
• R/minModuleEx-package.R
• src/MyClass.cpp
• R/zzz.R

I will discuss each of these in turn.

DESCRIPTION

This is the standard DESCRIPTION file that all R packages have. The lines that are important are:

Depends: Rcpp (>= 0.12.8)
Imports: Rcpp (>= 0.12.8)
LinkingTo: Rcpp
RcppModules: MyModule

The Imports and LinkingTo lines should be generated by RStudio.
The RcppModules: line should contain the name(s) of the module(s) that you want to use in this package. I have only one module in this package, which is unimaginatively named MyModule. The module exposes two classes, MyClass and AnotherClass.

NAMESPACE and R/minModuleEx-package.R

The first of these is the standard NAMESPACE file, and it is automatically generated using roxygen2. To make sure this happens you need to select Project Options… from the Tools menu. It will bring up the following dialogue box:

Select the Build Tools tab, and make sure that the Generate documentation with Roxygen checkbox is ticked, then click on the Configure… button and make sure that all of the checkboxes shown ticked below are ticked:

Note: If you don't want to use Roxygen, then you do not need the R/minModuleEx-package.R file, and you simply need to put the following three lines in the NAMESPACE file:

export(AnotherClass)
export(MyClass)
useDynLib(minModuleEx)

You need to notice two things. Firstly, this NAMESPACE explicitly exports the two classes MyClass and AnotherClass. This means these classes are available to the user from the command prompt. If you only want the classes to be available to R functions in the package, then you do not need to export them. Secondly, as previously noted, if you are using Roxygen, then these export statements are generated automatically from the comments just before each class declaration in the C++ code, which is discussed in the next section. The useDynLib(minModuleEx) line is generated from the line

#' @useDynLib minModuleEx

in the R/minModuleEx-package.R file.

src/MyClass.cpp

This file contains the C++ class definition of each class (MyClass and AnotherClass). There is nothing particularly special about these class declarations, although the comment lines before the class declarations,

//' @export MyClass
class MyClass{

and

//' @export AnotherClass
class AnotherClass{

generate the export statements in the NAMESPACE file.
This file also contains the Rcpp Module definition:

RCPP_MODULE(MyModule) {
  using namespace Rcpp;

  class_<MyClass>("MyClass")
    .default_constructor("Default constructor") // This exposes the default constructor
    .constructor<NumericVector>("Constructor with an argument") // This exposes the other constructor
    .method("print", &MyClass::print) // This exposes the print method
    .property("Bender", &MyClass::getBender, &MyClass::setBender) // and this shows how we set up a property
  ;

  class_<AnotherClass>("AnotherClass")
    .default_constructor("Default constructor")
    .constructor<int>("Constructor with an argument")
    .method("print", &AnotherClass::print)
  ;
}

In this module I have:

1. Two classes, MyClass and AnotherClass.
2. Each class has:
   • A default constructor
   • A constructor which takes arguments from R
   • A print method
3. In addition, MyClass demonstrates the use of a property field which (simplistically) provides the user with simple retrieval from and assignment to a scalar class member variable. It is unclear to me whether it works for other data types; anecdotally, I had no luck with matrices.

R/zzz.R

As you might guess from the nonsensical name, it is not essential to call this file zzz.R. The name comes from a suggestion from Dirk Eddelbuettel. It contains a single, but absolutely essential, line of code:

loadModule("MyModule", TRUE)

This code can actually be in any of the R files in your package. However, if you explicitly put it in R/zzz.R then it is easy to remember where it is.

Using the Module from R

Once the package is built and loaded, using the classes from the module is very straightforward. To instantiate a class you use the new function. E.g.

m = new(MyClass)
a = new(AnotherClass)

This code will call the default constructor for each class. If you want to call a constructor which has arguments, then they can be added to the call to new. E.g.
set.seed(123)
m1 = new(MyClass, rnorm(10))

Each of these objects has a print method which can be called using the $ operator. E.g.

m$print()
a$print()
m1$print()

The output is

> m$print()
1.000000 2.000000 3.000000
> a$print()
0
> m1$print()
1.224082 0.359814 0.400771 0.110683 -0.555841 1.786913 0.497850 -1.966617 0.701356 -0.472791


The MyClass class has a module property, a concept also used in C#. A property is a scalar class member variable that can either be set or retrieved. For example, m1 has been constructed with the default value of bBender = FALSE; however, we can change it to TRUE easily:

m1$Bender = TRUE
m1$print()


Now our object m1 behaves more like Bender when asked to do something 🙂

> m1$print()
Bite my shiny metal ass!

Hopefully this will help you to use Rcpp modules in your project. This is a great feature of Rcpp and really makes it even more powerful.

An R/Rcpp mystery

This morning and today I spent almost four hours trying to deal with the fact that our R package DNAtools would not compile under Windows. The issue originated with a win-builder build which was giving errors like this:

I"D:/RCompile/recent/R/include" -DNDEBUG -I"d:/RCompile/CRANpkg/lib/3.4/Rcpp/include" -I"d:/Compiler/gcc-4.9.3/local330/include" -c DNTRare.cpp -o DNTRare.o
ID:/RCompile/recent/R/include: not found

and I replicated this (far too many times) on my Windows VM on my Mac. In the end this boiled down to the presence of our Makevars file, which contained only one line:

CXX_STD = CXX14

Deleting it fixed the problem and it now compiles just fine. It compiles fine locally, and I am waiting for the response from the win-builder site. I do not anticipate an issue, but it would be useful to understand what was going wrong. I must admit that I have forgotten which aspects of the C++14 standard we are using, but I do know that changing the line to

PKG_CXXFLAGS = -std=c++14

which I use in my multicool package gives me a different pain, with the compiler being unable to locate Rcpp.h after seeing a #include <Rcpp.h> directive.

Extracting elements from lists in Rcpp

If you are an R programmer, especially one with a solid background in data structures or with experience in a more traditional object oriented environment, then you probably use lists to mimic the features you might expect from a C-style struct or a class in Java or C++. Retrieving information from a list of lists, or a list of matrices, or a list of lists of vectors is fairly straightforward in R, but you may encounter some compiler error messages in Rcpp if you do not take the right steps.
Stupid as bro

This will not be a very long article, but I think it is useful to have this information somewhere other than Stack Overflow. Two posts, one from Dirk and one from Romain, contain the requisite information. The List class does not know what type of elements it contains; you have to tell it. That means if you have something like

x = list(a = matrix(1:9, ncol = 3), b = 4)

in your R code and

void Test(List x){
  IntegerMatrix a = x["a"];
}

in your C++, then you might get a compiler error complaining about certain things not being overloaded. As Dirk points out in another post (which I cannot find right at this moment), the accessor operator for a List simply returns a SEXP. Rcpp has done a pretty good job of removing the need for us to get our hands dirty with SEXPs, but they are still there. If you know (and you should, since you are the one writing the code and designing the data structures) that this SEXP actually is an IntegerMatrix, then you should cast it as one using the as<T>() function. That is,

void Test(List x){
  IntegerMatrix a = as<IntegerMatrix>(x["a"]);
}

So why does the simpler form sometimes work? If you look around the internet, you will see chunks of code like

int b = x["b"];
NumericVector y = x["y"];

which compile just fine. They work because the assignment operator has been overloaded for certain types in Rcpp, and so you will probably find you do not need explicit type coercion. However, it certainly will not hurt to be explicit for every assignment, and your code will benefit from doing so.

Generating pseudo-random variates C++-side in Rcpp

It is well-known that if you are writing simulation code in R you can often gain a performance boost by rewriting parts of your simulation in C++. These days the easiest way to do that, of course, is to use Rcpp. Simulation usually depends on random variates, and usually great numbers of them. One of the issues that may arise is that your simulation needs to execute on the C++ side of things.
For example, if you decide to programme your Metropolis-Hastings algorithm (not technically a simulation, I know) in Rcpp, then you are going to need to be able to generate hundreds of thousands, if not millions, of random numbers. You can use Rcpp's features to call R routines from within Rcpp to do this, e.g.

Function rnorm("rnorm");
rnorm(100, _["mean"] = 10.2, _["sd"] = 3.2);

(Credit: Dirk Eddelbuettel) but this has a certain overhead. C++ has had built-in random number generation functionality since at least the C++11 standard (and probably since the C++0x standard). The random header file provides a Mersenne-Twister uniform random number generator (RNG), a Linear Congruential Generator (LCG), and a Subtract-with-Carry RNG. There is also a variety of standard distributions available, described here.

Uniform random variates

The ability to generate good quality uniform random variates is essential, and the mt19937 engine provides it. The 19937 refers to the Mersenne prime $$(2^{19937}-1)$$ that this algorithm is based on, and also to its period length. There are four steps required to generate uniform random variates. These are:

1. Include the random header file
2. Construct an mt19937 random number engine, and initialise it with a seed
3. Construct a $$U(0,1)$$ random number generator
4. Use your engine and your uniform random number generator to draw variates

In code we would write

#include <random>
#include <Rcpp.h>

using namespace std;
using namespace Rcpp;

mt19937 mtEngine;
uniform_real_distribution<double> rngU;

//[[Rcpp::export]]
void setSeed(unsigned int seed){
  mtEngine = mt19937(seed);
  rngU = uniform_real_distribution<>(0.0, 1.0);
}

double runif(void){
  return rngU(mtEngine);
}

The function runif can now be called with runif(). Note that the setSeed function has been exported so that you can initialize the RNG engine with a seed of your choice.

How about normal random variates?

It does not require very much more effort to add a normal RNG to your code.
We simply add

normal_distribution<double> rngZ;

to our declared variables, and

//[[Rcpp::export]]
void setSeed(unsigned int seed){
  mtEngine = mt19937(seed);
  rngU = uniform_real_distribution<>(0.0, 1.0);
  rngZ = normal_distribution<double>(0.0, 1.0);
}

double rnorm(double mu = 0, double sigma = 1){
  return rngZ(mtEngine) * sigma + mu;
}

to our code base. Now rnorm can be called without arguments to get standard ($$N(0,1)$$) random variates, or with a mean, or a standard deviation, or both, to get $$N(\mu,\sigma^2)$$ random variates.

Rcpp does it

No doubt someone is going to tell me that Romain and Dirk have thought of this already for you, and that my solution is unnecessary Morris Dancing. However, I think there is merit in knowing how to use the standard C++ libraries. Please note that I do not usually advocate having global variables such as those in the code above. I would normally make mtEngine, rngU, and rngZ private member variables of a class, and then either instantiate it using an exported Rcpp function, or export the class and essential functions using an Rcpp module. Working C++ code and an R test script can be found here in the RNG folder. Enjoy!

Introduction to Using Regular Expressions in R

R and Regular Expressions

Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. – Jamie Zawinski, courtesy of Jeffrey Friedl's blog

This is a blog post on using regular expressions in R. I am sure there are plenty of others out there with the same information. However, this is also an exercise for me to see how hard it is to change my knitr .Rnw files into markdown and then into HTML. It turns out that most of the work can be done by running pandoc over the LaTeX file I get from knitting my .Rnw file. The rest I did manually.

What are regular expressions?

Regular expressions provide a powerful and sophisticated way of matching patterns in text.
For example, with regular expressions I can:

• Find a word or a series of letters
• Do wild-card searches
• Match patterns at the start or the end of a line
• Make replacements in text based on the match

A simple example

Very often when people read about regular expressions they do not grasp the power, thinking "I could do that without using regular expressions at all." Here is something I did the other day. I had a file whose lines consisted of the names of files containing R code. That is, I had twenty lines of text, with each line ending in .R. I needed to

1. insert export( at the start of each line, and
2. replace the .R at the end of each line with )

You can probably think of a way of doing this with a mixture of find and replace, and manual insertion. That is all well and good if you only have 20 lines. What if you had 20,000 lines? I did this in Notepad++ by matching the regular expression ^(.*)\.[rR]$ and replacing it with export(\1)

Before                After
#onewayPlot.R#        export(onewayPlot)
autocor.plot.R        export(autocor.plot)
boxqq.r               export(boxqq)
boxqq.r~              export(boxqq)
ciReg.R               export(ciReg)
cooks20x.R            export(cooks20x)
crossFactors.R        export(crossFactors)
crossFactors.R~       export(crossFactors)
crosstabs.R           export(crosstabs)
eovcheck.R            export(eovcheck)
estimateContrasts.R   export(estimateContrasts)
estimateContrasts1.R  export(estimateContrasts1)
estimateContrasts2.R  export(estimateContrasts2)
freq1way.r            export(freq1way)
freq1way.r~           export(freq1way)
getVersion.R          export(getVersion)
interactionPlots.R    export(interactionPlots)
layout20x.R           export(layout20x)
levene.test.R         export(levene.test)
levene.test.R~        export(levene.test)
multipleComp.R        export(multipleComp)
normcheck.R           export(normcheck)
onewayPlot.R          export(onewayPlot)
onewayPlot.R~         export(onewayPlot)
pairs20x.R            export(pairs20x)
pairs20x.R~           export(pairs20x)
predict20x.R          export(predict20x)
predict20x.R~         export(predict20x)
propslsd.new.R        export(propslsd.new)
residPlot.R           export(residPlot)
rowdistr.r            export(rowdistr)
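Incidentally, the same transformation can be done in R itself with gsub and a backreference. Here is a sketch using a few of the file names from the table above:

```r
files = c('autocor.plot.R', 'boxqq.r', 'ciReg.R')

## \\1 refers back to whatever the first capture group (.*) matched
exports = gsub('^(.*)\\.[rR]$', 'export(\\1)', files)
exports
```

This gives export(autocor.plot), export(boxqq) and export(ciReg), one per input line.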
• Regular expressions are a powerful way of describing patterns that we might search for or replace in a text document

• In some sense they are an extension of the wildcard search and replace operations you might carry out in Microsoft Word or a text editor.

• To the untrained eye they look like gobbledygook!

• Most programming languages have some form of regular expression library

• Some text editors, such as Emacs, Notepad++ and RStudio, also support regular expressions

• This is very useful when you don't need to write a programme

• The file utility grep uses regular expressions to find occurrences of a pattern in files

• Mastering regular expressions could take a lifetime; however, you can achieve a lot with a good introduction

• A couple of very good references are:

• Jeffrey Friedl's Mastering Regular Expressions, (2006), 3rd Edition, O'Reilly Media, Inc.
• Paul Murrell's Introduction to Data Technologies, (2009), Chapman & Hall/CRC Computer Science & Data Analysis.
• This entire book is on the web: http://www.stat.auckland.ac.nz/~paul/ItDT/, but if you use it a lot you really should support Paul and buy it
• Chapter 11 is the most relevant for this post

Tools for regular expressions in R

We need a set of tools to use regular expressions in something we understand — i.e. R. The functions I will make the most use of are

• grepl and grep

• gsub

• gregexpr

Functions for matching

• grep and grepl are the two simplest functions for pattern matching in R

• By pattern matching I mean being able to either

(i) Return the elements of, or indices of, a vector that match a set of characters (or a pattern)

(ii) Return TRUE or FALSE for each element of a vector on the basis of whether it matches a set of characters (or a pattern)

grep does (i) and grepl does (ii).
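A quick illustration of the difference, using a made-up vector:

```r
x = c('cat', 'dog', 'catfish')

grep('cat', x)                 # (i): the indices of the matching elements
grepl('cat', x)                # (ii): one TRUE/FALSE per element
grep('cat', x, value = TRUE)   # grep can also return the matching elements themselves
```

The value = TRUE argument is a handy extra: it makes grep return the matching strings rather than their indices.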

Very simple regular expressions

At its simplest, a regular expression can just be a string you want to find. For example, the commands below look for the string James in the vector names. This may seem like a silly example, but it demonstrates a very simple regular expression called a string literal, informally meaning match this string — literally!

names = c('James Curran', 'Robert Smith',
'James Last')
grep('James', names)

## [1] 1 3


Wild cards and other metacharacters

• At the next level up, we might like to search for a string with a single wild card character

• For example, my surname is sometimes (mis)spelled with an e or an i

• The regular expression symbol/character for any character is the full stop or period

• So my regular expression would be Curr.n, e.g.

surnames = c('Curran', 'Curren', 'Currin',
'Curin')
grepl('Curr.n', surnames)

## [1]  TRUE  TRUE  TRUE FALSE
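Because . matches any character, Curr.n would also match a string like Currxn. If you ever need to match an actual full stop, you can escape it with a double backslash (escaping is covered in the notes on metacharacters below). A small made-up example:

```r
versions = c('3.14', '3514')

grepl('3.14', versions)    # the unescaped . matches the 5 as well
grepl('3\\.14', versions)  # the escaped \\. matches only a literal full stop
```

The first call matches both strings; the second matches only '3.14'.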

• The character . is the simplest example of a regular expression metacharacter

• The other metacharacters are [ ], [^ ], \, ?, *, +, { }, ^, $, \<, \>, | and ( )
• If a character is a regular expression metacharacter then it has a special meaning to the regular expression interpreter
• There will be times, however, when you want to search for a full stop (or any of the other metacharacters). To do this you can escape the metacharacter by preceding it with a double backslash \\.
• Note that we need two backslashes in R; nearly every other language uses a single backslash
• Note that \\ followed by a digit from 0 to 9 has special meaning too, e.g. \\1

Alternation

• Whilst this example obviously works, there is a more sensible way to do this, and that is to use the alternation ('or') operator |.
• E.g.

grepl('Curr(a|e|i)n', c('Curran', 'Curren', 'Currin', 'Curin'))

## [1]  TRUE  TRUE  TRUE FALSE

• This regular expression contains two metacharacters, ( and |
• The round bracket ( has another meaning later on, but here it delimits the alternation
• We read (a|e|i) as a or e or i

A bigger example – counting words

In this example we will use the text of Moby Dick. The R script below does the following things:

1. Opens a file connection to the text
2. Reads all the lines into a vector of lines
3. Counts the number of lines where the word whale is used
4. Counts the number of lines where the word Ahab is used

## open a read connection to the Moby Dick
## text from Project Gutenberg
mobyURL = 'http://www.gutenberg.org/cache/epub/2701/pg2701.txt'
f1 = url(mobyURL, 'r')

## read the text into memory and close
## the connection
Lines = readLines(f1)
close(f1)

Note: The code above is what I would normally do, but if you do it too often you get this nice message from Project Gutenberg:

Don't use automated software to download lots of books. We have a limit on how fast you can go while using this site. If you surpass this limit you get blocked for 24h.

which is fair enough, so I am actually using a version I stored locally.
## Throw out all the lines before
## 'Call me Ishmael'
i = grep('Call me Ishmael', Lines)
Lines = Lines[-(1:(i - 1))]

numWhale = sum(grepl('(W|w)hale', Lines))
numAhab = sum(grepl('(A|a)hab', Lines))
cat(paste('Whale:', numWhale, ' Ahab:', numAhab, '\n'))

## Whale: 1487  Ahab: 491

Note: I am being explicit here about the capitals. In fact, as my friend Mikkel Meyer Andersen points out, I do not have to be: grep, grepl, regexpr and gregexpr all have an argument ignore.case which can be set to TRUE. However, I did want to highlight that regular expressions are generally case sensitive.

This programme will count the number of lines containing the words whale or Ahab, but not the number of occurrences. To count the number of occurrences, we need to work slightly harder. The gregexpr function globally matches a regular expression. If there is no match, then gregexpr returns -1. However, if there are one or more matches, then gregexpr returns the position of the match, and the length of the match, for each matching instance. For example, we will look for the word the in the following three sentences:

s1 = 'James is a silly boy'
s2 = 'The cat is hungry'
s3 = 'I do not know about the cat but the dog is stupid'

## Set the regular expression
## Note: This would match 'There' and 'They' as
## well as others for example
pattern = '[Tt]he'

## We will store the matches so that we
## can examine them in turn
m1 = gregexpr(pattern, s1)
m2 = gregexpr(pattern, s2)
m3 = gregexpr(pattern, s3)

There are no matches in the first sentence, so we expect gregexpr to return -1

print(m1)

## [[1]]
## [1] -1
## attr(,"match.length")
## [1] -1
## attr(,"useBytes")
## [1] TRUE

which it does. In the second sentence, there is a single match at the start of the sentence

print(m2)

## [[1]]
## [1] 1
## attr(,"match.length")
## [1] 3
## attr(,"useBytes")
## [1] TRUE

This result tells us that there is a single match at character position 1, and that the match is 3 characters long.
In the third example there are two matches, at positions 21 and 33, and they are both 3 characters long

print(m3)

## [[1]]
## [1] 21 33
## attr(,"match.length")
## [1] 3 3
## attr(,"useBytes")
## [1] TRUE

So in order to count the number of occurrences of a word we need to use gregexpr, keep all of those instances where the result is not -1, and then count the number of matches using the length function, e.g.

## count the number of occurrences of whale
pattern = '(W|w)hale'
matches = gregexpr(pattern, Lines)

## if gregexpr returns -1, then the number of
## matches is 0; if not, then the number of
## matches is given by length
counts = sapply(matches, function(x){
  if(x[1] == -1)
    return(0)
  else
    return(length(x))})
cat(paste('Whale:', sum(counts), '\n'))

## Whale: 1564

Character classes/sets or the [ ] operator

• Regular expression character sets provide a simple mechanism for matching any one of a set of characters
• For example [Tt]he will match The and the
• The real strength of character sets is in their special range and set operators
• For example the regular expressions:
  • [0-9] will match any digit from 0 to 9
  • [a-z] will match any lower case letter from a to z
  • [A-Z0-9] will match any upper case letter from A to Z or any digit from 0 to 9
  and so on
• You may initially think that character sets are like alternation, but they are not. A character set treats its contents as an unordered set of single characters
• So (se|ma)t will (fully) match set or mat, but [sema]t will not
• Alternatively, [sema]t will match st, et, mt and at
• The POSIX system defines a set of special character classes which are supported in R and can be very useful.
These are:

[:alpha:]  Alphabetic (only letters)
[:lower:]  Lowercase letters
[:upper:]  Uppercase letters
[:digit:]  Digits
[:alnum:]  Alphanumeric (letters and digits)
[:space:]  White space
[:punct:]  Punctuation

• The regular expression [[:lower:]] will help you capture accented lower case letters like à, é, or ñ, whereas [a-z] would miss all of them
• You may think this is uncommon, but optical character recognition (OCR) text often has characters like this present

Negated character sets – [^...]

• Negated character sets provide a way for you to match anything but the characters in the set
• A very common example of this is when you want to match everything between a set of (round) brackets
• E.g. [^)] matches any single character that is not a closing bracket, so the regular expression \(([^)]*)\) will match a pair of round brackets and capture everything between them

Matching more than one character — quantifiers

Another common thing we might do is match zero, one, or more occurrences of a pattern. We have four ways to do this:

1. ? means match zero or one occurrences of the previous pattern
2. * means match zero or more occurrences of the previous pattern
3. + means match one or more occurrences of the previous pattern
4. {a,b} means match from a to b occurrences of the previous pattern
   • b may be omitted, so that {a,} means match a or more occurrences of the previous pattern
   • b and the comma may be omitted, so that {a} means match exactly a occurrences of the previous pattern

Continuing with our misspelling example, I would like a way of picking up all of the possibilities of misspelling my surname. Variants I've seen are Curren, Currin, Curin, Curan, Curen, Curn and even Karen! If I wanted to construct a regular expression to match all of these possibilities I need to match (in this order):

1. a C or a K
2. a u or an a
3. one or two occurrences of r
4. zero or more occurrences of e or i
5. and finally an n

This is how I do this with regular expressions

pattern = '[CK](u|a)r{1,2}(e|i)*n'
badNames = c('Curren', 'Currin', 'Curin', 'Curan',
             'Curen', 'Curn', 'Karen')
grepl(pattern, badNames)

## [1]  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE

Notice how the regular expression didn't match Curan. To fix the code so that it does match, we need to change the set of possible letters before the n from (e|i) to allow a as a possibility, i.e. (a|e|i), or alternatively [aei]

pattern1 = '[CK](u|a)r{1,2}(a|e|i)*n'
pattern2 = '[CK](u|a)r{1,2}[aei]*n'
badNames = c('Curren', 'Currin', 'Curin', 'Curan',
             'Curen', 'Curn', 'Karen')
grepl(pattern1, badNames)

## [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE

grepl(pattern2, badNames)

## [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE

Anchors — matching a position

• The metacharacters ^, $, \< and \> match positions

• ^ and $ match the start and the end of a line respectively
• \< and \> match the start and end of a word respectively
• I find I use ^ and $ more often

• For example

• ^James will match "James hates the cat" but not "The cat does not like James"
• cat$ will match "James hates the cat" but not "The cat does not like James"

Summary — R functions for matching regular expressions

The functions grep and regexpr are the most useful. Loosely,

• grep tells you which elements of your vector match the regular expression
• whereas regexpr tells you which elements match, where they match, and how long each match is
• gregexpr matches every occurrence of the pattern, whereas regexpr only matches the first occurrence

The example below shows the difference

poss = c('Curren', 'Currin', 'Curin', 'Curan',
         'Curen', 'Curn', 'Karen')
pattern = '[CK](u|a)r{1,2}(i|e)*n'
grep(pattern, poss)

## [1] 1 2 3 5 6 7

gregexpr(pattern, poss)

## [[1]]
## [1] 1
## attr(,"match.length")
## [1] 6
## attr(,"useBytes")
## [1] TRUE
##
## [[2]]
## [1] 1
## attr(,"match.length")
## [1] 6
## attr(,"useBytes")
## [1] TRUE
##
## [[3]]
## [1] 1
## attr(,"match.length")
## [1] 5
## attr(,"useBytes")
## [1] TRUE
##
## [[4]]
## [1] -1
## attr(,"match.length")
## [1] -1
## attr(,"useBytes")
## [1] TRUE
##
## [[5]]
## [1] 1
## attr(,"match.length")
## [1] 5
## attr(,"useBytes")
## [1] TRUE
##
## [[6]]
## [1] 1
## attr(,"match.length")
## [1] 4
## attr(,"useBytes")
## [1] TRUE
##
## [[7]]
## [1] 1
## attr(,"match.length")
## [1] 5
## attr(,"useBytes")
## [1] TRUE

String substitution

• Finding or matching is often only one half of the equation
• Quite often we want to find and replace
• This process is called string substitution
• R has two functions, sub and gsub
• The difference between them is that sub only replaces the first occurrence of the pattern, whereas gsub replaces every occurrence
• Normal usage is quite straightforward. E.g.
poss = 'Curren, Currin, Curin, Curan, Curen, Curn and Karen'
pattern = '[CK](u|a)r{1,2}(i|e)*n'
sub(pattern, 'Curran', poss)

## [1] "Curran, Currin, Curin, Curan, Curen, Curn and Karen"

gsub(pattern, 'Curran', poss)

## [1] "Curran, Curran, Curran, Curan, Curran, Curran and Curran"

Back substitution

• Abnormal usage is probably quite unlike anything you have seen before
• One of the most powerful features of regular expressions is the ability to re-use something that you matched in a regular expression
• This idea is called back substitution
• Imagine that I have a text document with numbered items in it. E.g.

1. James
2. David
3. Kai
4. Corinne
5. Vinny
6. Sonika

• How would I go about constructing a regular expression that would take each of the lines in my document and turn them into a nice LaTeX itemized list where the numbers are the list item markers?
• The trick is to capture the numbers at the start of each line and use them in the substitution
• To do this we use the round brackets to capture the match of interest
• And we use the \\1 and \\2 backreference operators to retrieve the information we matched. E.g.

Lines = c('1. James', '2. David', '3. Kai',
          '4. Corinne', '5. Vinny', '6. Sonika')
pattern = '(^[0-9]\\.)[[:space:]]+([[:upper:]][[:lower:]]+$)'
gsub(pattern, '\\\\item[\\1]{\\2}', Lines)

## [1] "\\item[1.]{James}"   "\\item[2.]{David}"  "\\item[3.]{Kai}"
## [4] "\\item[4.]{Corinne}" "\\item[5.]{Vinny}"   "\\item[6.]{Sonika}"


Note the double backslash will become a single backslash when written to file.
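You can check this with cat, which writes the string out rather than showing its escaped console form:

```r
x = "\\item[1.]{James}"

print(x)      # the console shows the escaped form, with \\
cat(x, "\n")  # cat writes the actual single backslash: \item[1.]{James}
```

The string itself contains only one backslash; the doubled backslash is just how R displays it.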

I actually used a regular expression with back substitution to format output for LaTeX in the file name example at the start of this post. My regular expression was the following:

(^[^A-Za-z]*([A-Za-z0-9.]+)[.][rR].*)


and this was my back substitution expression

 \\verb!\1!  &  \\verb!\\export(\2)! \\\\


There is only a single \ in the back references because I just did this in the RStudio editor, not in R. Note how there are two back references, corresponding to two capture groups, one of which is nested inside the other. In nesting situations like this, the capture groups are labelled in order from the outermost inwards.
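Here is a tiny illustration of that numbering, using a made-up pattern with one group nested inside another. The outer group is \\1 and the nested group is \\2:

```r
x = 'boxqq.r'

## group 1: the whole file name; group 2 (nested inside it): the name without the extension
gsub('^((.*)\\.[rR])$', 'file=\\1 fun=\\2', x)
```

This produces file=boxqq.r fun=boxqq, showing that the outermost group gets the lower number.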

String manipulation

We need two more things to finish this section

• The ability to extract smaller strings from larger strings

• The ability to construct strings from smaller strings

• The first is called extracting substrings

• The second is called string concatenation

• We use the functions substr and paste for these tasks respectively

substr

• substr is very simple

• Its arguments are

• the string, x,
• a starting position, start,
• and an stopping position, stop.
• It extracts all the characters in x from start to stop

• If the related function substring is used, then stop is optional

• If stop is not provided then substring extracts all the characters from start to the end of x

E.g.

substr('abcdef', 2, 4)

## [1] "bcd"

substring('abcdef', 2)

## [1] "bcdef"
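substr also has an assignment form (see ?substr), which replaces characters in place rather than extracting them. A quick sketch:

```r
x = 'abcdef'

## replace characters 2 to 4 with 'XYZ'
substr(x, 2, 4) <- 'XYZ'
x
```

After the assignment, x is "aXYZef"; the replacement must fit within the positions being overwritten.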


paste

paste is a little more complicated:

• paste takes 1 or more arguments, and two optional arguments sep and collapse

• sep is short for separator

• For the following discussion I am going to assume that I call paste with two arguments x and y

• If x and y are both scalars then paste(x, y) will join them together in a single string separated by a space, e.g.

paste(1, 2)

## [1] "1 2"

paste('a', 'b')

## [1] "a b"

paste('a =', 1)

## [1] "a = 1"

• If x and y are both scalars and you define sep to be "," say then paste(x, y, sep = ",") will join them together in a single string separated by a comma,
e.g.
paste(1, 3, sep = ',')

## [1] "1,3"

paste(TRUE, 3, sep = ',')

## [1] "TRUE,3"

paste('a', 3, sep = ',')

## [1] "a,3"

• If x and y are both vectors and you define sep to be "," say, then paste(x, y, sep = ",") will join each element of x with the corresponding element of y into a set of strings where the elements are separated by a comma, e.g.
x = 1:3
y = LETTERS[1:3]
paste(x, y, sep = ',')

## [1] "1,A" "2,B" "3,C"

• If collapse is not NULL, e.g. "-", then all of the resulting strings will be pasted together into a single string, separated by the collapse string.
x = 1:3
y = LETTERS[1:3]
paste(x, y, sep = ',', collapse = '-')

## [1] "1,A-2,B-3,C"

• And of course you can take it as far as you like 🙂
x = 1:3
y = LETTERS[1:3]
paste('(', paste(x, y, sep = ','), ')',
sep = '', collapse = '-')

## [1] "(1,A)-(2,B)-(3,C)"


paste0

Mikkel also points out that paste0 is a shortcut to avoid specifying sep = "" every time.
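For example, the two calls below are equivalent:

```r
paste('x', 1:3, sep = '')  # explicit empty separator
paste0('x', 1:3)           # same result, less typing
```

Both produce "x1", "x2" and "x3".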