Category Archives: C++

An introduction to using Rcpp modules in an R package


Introduction

The aim of this post is to provide readers with a minimal example demonstrating the use of Rcpp modules within an R package. The code and all files for this example can be found on https://github.com/jmcurran/minModuleEx.

What are Rcpp Modules?

Rcpp modules allow R programmers to expose their C++ classes to R. By “expose” I mean the ability to instantiate a C++ object from within R, and to call methods which have been defined in the C++ class definition. I am sure there are many reasons why this is useful, but the main reason for me is that it provides a simple mechanism to create multiple instances of the same class. An example of where I have used this is my multicool package, which is used to generate permutations of multisets. One can certainly imagine a situation where you might need to generate the permutations of more than two multisets at the same time. multicool allows you to do this by instantiating multiple multicool objects.

The Files

I will make the assumption that you, the reader, know how to create a package which uses Rcpp. If you do not know how to do this, then I suggest you look at the section entitled “Creating a New Package” here on the RStudio support site. Important: although it is mentioned in the text, the image displayed on that page does not show that you should change the Type: drop-down box to Package w/ Rcpp.

Creating a package with Rcpp

This makes sure that a bunch of fields are set for you in the DESCRIPTION file that ensure Rcpp is linked to and imported.

There are five files in this minimal example. They are

  • DESCRIPTION
  • NAMESPACE
  • R/minModuleEx-package.R
  • src/MyClass.cpp
  • R/zzz.R

I will discuss each of these in turn.

DESCRIPTION

This is the standard DESCRIPTION file that all R packages have. The lines that are important are:

Depends: Rcpp (>= 0.12.8)
Imports: Rcpp (>= 0.12.8)
LinkingTo: Rcpp
RcppModules: MyModule

The Imports and LinkingTo lines should be generated by RStudio. The RcppModules: line should contain the name(s) of the module(s) that you want to use in this package. I have only one module in this package, which is unimaginatively named MyModule. The module exposes two classes, MyClass and AnotherClass.

NAMESPACE and R/minModuleEx-package.R

The first of these is the standard NAMESPACE file, and it is automatically generated using roxygen2. To make sure this happens you need to select Project Options… from the Tools menu. It will bring up the following dialogue box:

Project Options

Select the Build Tools tab, and make sure that the Generate documentation with Roxygen checkbox is ticked, then click on the Configure… button and make sure that all of the checkboxes that are ticked below are ticked:

Configuring Roxygen

Note: If you don’t want to use Roxygen, then you do not need the R/minModuleEx-package.R file, and you simply need to put the following three lines in the NAMESPACE file:

export(AnotherClass)
export(MyClass)
useDynLib(minModuleEx)

There are two things to notice. Firstly, this NAMESPACE explicitly exports the two classes MyClass and AnotherClass, which means the classes are available to the user from the command prompt. If you only want the classes to be available to R functions in the package, then you do not need to export them. Secondly, as previously noted, if you are using Roxygen, then these export statements are generated automatically from the comments just before each class declaration in the C++ code, which is discussed in the next section. The useDynLib(minModuleEx) line is generated from the line

#' @useDynLib minModuleEx

in the R/minModuleEx-package.R file.

src/MyClass.cpp

This file contains the C++ class definition of each class (MyClass and AnotherClass). There is nothing particularly special about these class declarations, although the comment lines before the class declarations,

//' @export MyClass
class MyClass{

and

//' @export AnotherClass
class AnotherClass{

generate the export statements in the NAMESPACE file.

This file also contains the Rcpp Module definition:

RCPP_MODULE(MyModule) {
  using namespace Rcpp;

  class_<MyClass>( "MyClass")
    .default_constructor("Default constructor") // This exposes the default constructor
    .constructor<NumericVector>("Constructor with an argument") // This exposes the other constructor
    .method("print", &MyClass::print) // This exposes the print method
    .property("Bender", &MyClass::getBender, &MyClass::setBender) // and this shows how we set up a property
  ;

  class_<AnotherClass>("AnotherClass")
    .default_constructor("Default constructor")
    .constructor<int>("Constructor with an argument")
    .method("print", &AnotherClass::print)
  ;
}

In this module I have:

  1. Two classes MyClass and AnotherClass.
  2. Each class has:
    • A default constructor
    • A constructor which takes arguments from R
    • A print method
  3. In addition, MyClass demonstrates the use of a property, which (simplistically) provides the user with simple retrieval from and assignment to a scalar class member variable. It is unclear to me whether it works for other data types, but anecdotally I had no luck with matrices. A sketch of the class declarations behind this module is given below.
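
To make this concrete, here is a rough sketch of what the two class declarations in src/MyClass.cpp might look like. The definitive version is in the GitHub repository; the member names, and the exact behaviour of print, are my reconstructions based on the output shown later in this post.

#include <Rcpp.h>
using namespace Rcpp;

//' @export MyClass
class MyClass{
  NumericVector m_vData;
  bool m_bBender;

public:
  MyClass(){ // default constructor: some sample data
    m_vData = NumericVector::create(1, 2, 3);
    m_bBender = false;
  }

  MyClass(NumericVector vData){ // constructor with an argument
    m_vData = vData;
    m_bBender = false;
  }

  void print(void){
    if(m_bBender){
      Rprintf("Bite my shiny metal ass!\n");
    }else{
      for(int i = 0; i < m_vData.size(); i++)
        Rprintf("%f ", (double)m_vData[i]);
      Rprintf("\n");
    }
  }

  bool getBender(void){ return m_bBender; }
  void setBender(bool bBender){ m_bBender = bBender; }
};

//' @export AnotherClass
class AnotherClass{
  int m_nValue;

public:
  AnotherClass(){ m_nValue = 0; }                // default constructor
  AnotherClass(int nValue){ m_nValue = nValue; } // constructor with an argument

  void print(void){ Rprintf("%d\n", m_nValue); }
};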

R/zzz.R

As you might guess from the nonsensical name, it is not essential to call this file zzz.R. The name comes from a suggestion from Dirk Eddelbuettel. It contains a single, but absolutely essential, line of code:

loadModule("MyModule", TRUE)

This code can actually be in any of the R files in your package. However, if you explicitly put it in R/zzz.R then it is easy to remember where it is.

Using the Module from R

Once the package is built and loaded, using the classes from the module is very straightforward. To instantiate a class you use the new function. E.g.

m = new(MyClass)
a = new(AnotherClass)

This code will call the default constructor for each class. If you want to call a constructor which has arguments, then they can be added to the call to new. E.g.

set.seed(123)
m1 = new(MyClass, rnorm(10))

Each of these objects has a print method which can be called using the $ operator. E.g.

m$print()
a$print()
m1$print()

The output is

> m$print()
1.000000 2.000000 3.000000
> a$print()
0
> m1$print()
1.224082 0.359814 0.400771 0.110683 -0.555841 1.786913 0.497850 -1.966617 0.701356 -0.472791

The MyClass class has a module property – a concept also used in C#. A property is a scalar class member variable that can be set or retrieved. For example, m1 has been constructed with the default value of bBender = FALSE; however, we can change it to TRUE easily:

m1$Bender = TRUE
m1$print()

Now our object m1 behaves more like Bender when asked to do something 🙂

> m1$print()
Bite my shiny metal ass!

Hopefully this will help you to use Rcpp modules in your project. This is a great feature of Rcpp and really makes it even more powerful.


An R/Rcpp mystery

Today I spent almost four hours trying to deal with the fact that our R package DNAtools would not compile under Windows. The issue originated with a win-builder build which was giving errors like this:


I"D:/RCompile/recent/R/include" -DNDEBUG -I"d:/RCompile/CRANpkg/lib/3.4/Rcpp/include" -I"d:/Compiler/gcc-4.9.3/local330/include" -c DNTRare.cpp -o DNTRare.o
ID:/RCompile/recent/R/include: not found

and I replicated this (far too many times) on my Windows VM on my Mac.

In the end this boiled down to the presence of our Makevars file which contained only one line:


CXX_STD = CXX14

Deleting it fixed the problem, and the package now compiles just fine locally; I am waiting for the response from the win-builder site. I do not anticipate an issue, but it would be useful to understand what was going wrong. I must admit that I have forgotten which aspects of the C++14 standard we are using, but I do know that changing that line to


PKG_CXXFLAGS= -std=c++14

which I use in my multicool package, gives me a different pain, with the compiler being unable to locate Rcpp.h after seeing the #include directive.


Extracting elements from lists in Rcpp

If you are an R programmer, especially one with a solid background in data structures or with experience in a more traditional object oriented environment, then you probably use lists to mimic the features you might expect from a C-style struct or a class in Java or C++. Retrieving information from a list of lists, or a list of matrices, or a list of lists of vectors is fairly straightforward in R, but you may encounter some compiler error messages in Rcpp if you do not take the right steps.

Stupid as bro

This will not be a very long article, but I think it is useful to have this information somewhere other than Stack Overflow. Two posts, one from Dirk and one from Romain, contain the requisite information.

The List class does not know what type of elements it contains. You have to tell it. That means if you have something like

x = list(a = matrix(1:9, ncol = 3), b = 4)

in your R code and

void Test(List x){
  IntegerMatrix a = x["a"];
}

in your C++, then you might get a compiler error complaining about certain things not being overloaded. As Dirk points out in another post (which I cannot find right at this moment), the accessor operator for a List simply returns a SEXP. Rcpp has done a pretty good job of removing the need for us to get our hands dirty with SEXPs, but they are still there. If you know (and you should, since you are the one writing the code and designing the data structures) that this SEXP actually is an IntegerMatrix, then you should cast it as one using the as<T>() function. That is,

void Test(List x){
  IntegerMatrix a = as<IntegerMatrix>(x["a"]);
}
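
The same pattern applies to nested structures. For example (this is only a sketch, and the names are made up rather than taken from a real package), if an element of x is itself a list containing a numeric vector, you cast at each level:

void TestNested(List x){
  // x might be built in R as: x = list(inner = list(y = rnorm(5)))
  List inner = as<List>(x["inner"]);               // cast the element to a List first
  NumericVector y = as<NumericVector>(inner["y"]); // then cast to the concrete type
}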

So why does this work?

If you look around the internet, you will see chunks of code like

int b = x["b"];
NumericVector y = x["y"];

which compile just fine. Why does this work? It works because the assignment operator has been overloaded for certain types in Rcpp, and so you will probably find you do not need explicit type coercion. However, it certainly will not hurt to explicitly do so for every assignment, and your code will benefit from doing so.


Generating pseudo-random variates C++-side in Rcpp


It is well-known that if you are writing simulation code in R you can often gain a performance boost by rewriting parts of your simulation in C++. These days the easiest way to do that of course is to use Rcpp. Simulation usually depends on random variates, and usually great numbers of them. One of the issues that may arise is that your simulation needs to execute on the C++ side of things. For example, if you decide to programme your Metropolis-Hastings algorithm (not technically a simulation I know) in Rcpp, then you are going to need to be able to generate hundreds of thousands, if not millions, of random numbers. You can use Rcpp’s features to call R routines from within Rcpp to do this, e.g.

Function rnorm("rnorm");
rnorm(100, _["mean"] = 10.2, _["sd"] = 3.2 );

(Credit: Dirk Eddelbuettel)

but this has a certain overhead. C++ has had built-in random number generation functionality since at least the C++11 standard (and probably since the C++0x draft standard). The random header file provides a Mersenne Twister uniform random number generator (RNG), a linear congruential generator (LCG), and a subtract-with-carry RNG. There is also a variety of standard distributions available, described here.

Uniform random variates

The ability to generate good quality uniform random variates is essential, and the mt19937 engine provides just that. The 19937 refers to the Mersenne prime \((2^{19937}-1)\) that this algorithm is based on, which is also its period. There are four steps required to generate uniform random variates. These are:

  1. Include the random header file
  2. Construct an mt19937 random number engine, and initialise it with a seed
  3. Construct a \(U(0,1)\) random number generator
  4. Use your engine and your uniform random number generator to draw variates

In code we would write

#include <random>
#include <Rcpp.h>

using namespace std;
using namespace Rcpp;

mt19937 mtEngine;
uniform_real_distribution<double> rngU;

//[[Rcpp::export]]
void setSeed(unsigned int seed){
  mtEngine = mt19937(seed);
  rngU = uniform_real_distribution<>(0.0, 1.0);
}

double runif(void){
  return rngU(mtEngine);
}

The function runif can now be called C++-side with runif(). Note that the setSeed function has been exported so that you can initialise the RNG engine with a seed of your choice.
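
One easy way to check the stream from R is to export a small wrapper that fills a NumericVector using the C++-side runif. This is only a sketch, the name runifVec is mine, and it assumes the function sits in the same file as the code above:

//[[Rcpp::export]]
NumericVector runifVec(int n){
  NumericVector results(n);

  for(int i = 0; i < n; i++)
    results[i] = runif(); // the C++-side runif defined above

  return results;
}

Calling setSeed(123) and then runifVec(5) from R should give you the same five uniform variates every time.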

How about normal random variates?

It does not require very much more effort to add a normal RNG to your code. We simply add

normal_distribution<double> rngZ;

to our declared variables, and

//[[Rcpp::export]]
void setSeed(unsigned int seed){
  mtEngine = mt19937(seed);
  rngU = uniform_real_distribution<>(0.0, 1.0);
  rngZ = normal_distribution<double>(0.0, 1.0);
}

double rnorm(double mu = 0, double sigma = 1){
    return rngZ(mtEngine) * sigma + mu;
}

to our code base. Now rnorm can be called without arguments to get standard (\(N(0,1)\)) random variates, or with a mean, or a standard deviation, or both, to get \(N(\mu,\sigma^2)\) random variates.

Rcpp does it

No doubt someone is going to tell me that Romain and Dirk have thought of this already for you, and that my solution is unnecessary Morris Dancing. However, I think there is merit in knowing how to use the standard C++ libraries.

Please note that I do not usually advocate having global variables such as those in the code above. I would normally make mtEngine, rngU, and rngZ private member variables of a class, and then either instantiate it using an exported Rcpp function, or export the class and its essential functions using an Rcpp module.
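
For example, a rough sketch of that module-based approach (the class, module, and method names here are my own, and this is not the code in the RNG folder mentioned below) might look like:

#include <random>
#include <Rcpp.h>

using namespace std;
using namespace Rcpp;

class RNG{
  mt19937 m_mtEngine;
  uniform_real_distribution<double> m_rngU;
  normal_distribution<double> m_rngZ;

public:
  RNG(unsigned int seed){
    m_mtEngine = mt19937(seed);
    m_rngU = uniform_real_distribution<>(0.0, 1.0);
    m_rngZ = normal_distribution<double>(0.0, 1.0);
  }

  double runif(void){ return m_rngU(m_mtEngine); }

  double rnorm(double mu, double sigma){
    return m_rngZ(m_mtEngine) * sigma + mu;
  }
};

RCPP_MODULE(RNGModule){
  class_<RNG>("RNG")
    .constructor<unsigned int>("Construct an RNG with a seed")
    .method("runif", &RNG::runif)
    .method("rnorm", &RNG::rnorm)
  ;
}

As described in the Rcpp modules post above, the module then also needs an RcppModules: entry in the DESCRIPTION file and a loadModule() call on the R side before it can be used.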

Working C++ code and an R test script can be found here in the RNG folder. Enjoy!


Embracing the new. Is it time to ditch pointers in C++?

This post was originally published on December 22, 2014.

Recently I had the opportunity to revisit some research that I did circa 1997-1998, because someone asked me to write a book chapter on the subject. It is an interesting process to go back and look at your old work and apply all the things that you have learned in the intervening time period.

In this case the research relied on some C/C++ simulation programmes that I had written. The simulations, even for small cases, performed hundreds of thousands of iterations to estimate lower bounds and so C++ was a natural choice at the time. R was still a fledgling, and Splus simply was not up to extensive simulation work. Given the nature of these simulations, I still do not think I would use R, even though it is very fast these days.

Simulations, being simulations, rely extensively on random number generation, and of course these programmes were no exception. Re-running the programmes seemed trivial, and of course the compute time had been substantially reduced over the years. This led me to think that I could now explore some more realistic scenarios. If you, the reader, think I am being deliberately mysterious about my simulations, I am not. It is more that the actual research is a side issue to the problems I want to talk about here. The “more realistic inputs” simply correspond to larger simulated DNA databases, in line with those now maintained by many jurisdictions, and a set of allele frequencies generated from a much larger data set than the one I had access to in 1997, with a different set of loci.

There would clearly be no story if something did not happen with the new work. My early work was with databases of 100, 400 and 1,000 individuals. When I expanded this to 5,000 and 10,000 individuals I found that things began to go wrong.

Firstly, the programme began to quit unexpectedly on Windows, and produce segmentation faults when compiled with gcc on Linux. The crashes only happened with the larger database sizes, but strangely also in the case where N = 1,000, where there had previously been no crash. I thought initially that this might be because I had inadvertently hard-coded some of the array dimensions, and that the new data sets or larger runs were causing problems. Extensive examination of the code did not reveal any irregularities.

Random number generators and all that

I did discover fairly early on that I could no longer rely on George Marsaglia’s multiply-with-carry (MWC) uniform random number generator. The reason for this is that the generator, as coded, relies on integers of certain widths, and integer overflow, or wrapping. I had pretty much abandoned this some years ago when a former MSc student, Dr Alec Zwart, discovered that there were irregularities in the distribution of the bits. Using a random number a bit at a time is very useful when simulating breeding populations — which is something else I do quite often.

The Mersenne Twister

The Mersenne Twister has been around since 1997, and again advances in computing have made the overhead it incurs relatively negligible. My initial switch to a Mersenne Twister uniform random number generator (RNG) was through an implementation distributed by Agner Fog. This implementation has served me well for quite some time, and I have used it extensively. Sadly, that was not the case this time. I could not get Visual Studio 2013 to understand some of the enums, and faking it caused me problems elsewhere. I am sure there is nothing wrong with this implementation, but I certainly could not get it to work this time.

I discovered by reading around on the web that random number generation has become part of the new C++11 standard, and that it is fairly easy to get a Mersenne Twister random number generator. Most implementations start with a uniform integer, or long integer, random number stream and then wrap different classes around this stream. C++ is no exception:

#include <random>
 
using namespace std;
 
static mt19937 mtEngine;
uniform_real_distribution<double> rngU;
 
void init_gen(unsigned int seed){
  mtEngine = mt19937(seed);
  rngU = uniform_real_distribution<>(0.0, 1.0);
}
 
double runif(void){
    return rngU(mtEngine);
}

I have used a static variable to store the stream in my implementation but there is no requirement to do this.

Nothing is ever normal

I have also, for quite some time, used Chris Wallace’s Fastnorm code for very fast generation of standard normal random variates. However, I found that this too appeared to be causing me problems, especially when I changed operating systems. My programming philosophy these days is that my work should really be portable to any mainstream operating system (Windows, Linux, OS X), especially since I almost never write GUI code any more. Running on both Windows and Linux is useful, because when I want to run really big simulations I often will flick the code over to our cluster which strangely enough does not run on Windows – who knew?

It turns out that C++11 also has a normal random number generator. I have done very little research to find out what method is used, but my guess is that it is either an inverse CDF method, or at worst a Box-Muller based method. Adding a standard normal generator is easy:

static mt19937 mtEngine;
static normal_distribution<double> rngZ;
 
void init_gen(unsigned int seed){
  mtEngine = mt19937(seed);
  rngZ = normal_distribution<double>(0.0, 1.0);
}
 
double snorm(void){
  return rngZ(mtEngine);
}

So that will work, right?

After all of these changes, which do not seem substantial but which took me a long time to arrive at, everything was stable, right? Well, no: I was still getting a crash when N = 10,000, although, oddly, not when I started the simulation with that case.

Java to the rescue

I decided, probably incorrectly with hindsight, that I must be making some sort of stupid mistake with allocating memory and releasing it. I decided to take that completely out of the equation by switching to Java. A port from C++ to Java is actually a relatively painless thing to do, and I had a working version of my programme in a couple of hours. This was made easier by the fact that my colleague Duncan Taylor had ported Ross Ihaka’s C code, ripped out of R, for gamma random number generation (yes I need that too), and with a little tweaking I had it running in my programme as well. The Java port let me recognize that I had done some silly things in my original programme, such as storing an entire bootstrap sample before processing it and in the process chewing up CPU and memory time with needless copying. And after a little more hacking (like three days) it ran to my satisfaction and all simulations duly completed with about three hours of run time.

But what about C++?

Java has some pretty cool ideas, and it is a fun and easy language to programme in. However, my failure to get the C++ working was weighing heavily on my mind. I like to think that I am a hell of a lot better C++ programmer than a Java programmer, and I dislike the idea that I might be writing slower programmes. I also do not think Java is currently well-suited to scientific programming. I am sure some readers will tell me this is no longer true, but access to a well accepted scientific code library is missing, and although there are many good projects, a lot of them are one-man efforts, or have been abandoned. A good example of the latter is the COLT library from CERN.

Out with pointers

I thought about this for some time, and eventually it occurred to me that I could write a C++ programme that looked like a Java programme — that is, no pointers. C++ purists might shudder, but if you think of Java as simplified C++, then the concept is not so strange. Java treats every object argument in a function as being a reference. C++ can replicate this behaviour very easily by simply using its reference notation. The big trade-off was that I was also going to have to drop the pointers I used for dynamic allocation of memory. Java sort of fudges this as far as I can tell, because although the scalar types (int, double, boolean and others) are generally not treated as references, I think native arrays of them are, e.g. int[] or double[].
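
As a trivial sketch of what this looks like (my own example, not code from the simulation programme), passing by reference lets the callee modify the caller's object exactly as a Java method would:

#include <vector>

// The caller's vector is modified in place: no pointers, new or delete in sight.
void scale(std::vector<double> &values, double factor){
  for(std::size_t i = 0; i < values.size(); i++)
    values[i] *= factor;
}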

The STL

The Standard Template Library (STL) provides a set of low-overhead C++ template container classes, such as lists, vectors, maps and queues. These classes can be nested inside one another, and they can be dynamically resized at execution time. I have avoided using them in this way, especially when writing code to be very fast. However, I am fairly sure my colleague Brendon Brewer, who is much younger and a more modern C++ programmer, has told me that he never uses pointers. Given I had just finished for the year, this seemed like an ideal quick summer project.
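
For instance (again my own sketch rather than code from the simulation programme), a dynamically sized two-dimensional array of doubles that would once have been managed through a double** and explicit new and delete becomes:

#include <vector>
using std::vector;

// An nRows x nCols table of doubles, allocated without new or delete.
// The memory is released automatically when the vector goes out of scope.
vector< vector<double> > makeTable(int nRows, int nCols){
  vector< vector<double> > table(nRows, vector<double>(nCols, 0.0));
  return table;
}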

Another couple of days of recoding got me back to a running state, and now it is time to reveal what was probably the issue all along. Remember when I said I did this:

double runif(void){
    return rngU(mtEngine);
}

What I actually did was this:

double runif(void){
    return mtEngine() / 4294967295.0;
}

The large constant there is \(2^{32}-1\), the largest unsigned integer that can be represented on a 32-bit CPU. The mt19937 engine’s call operator, mtEngine(), returns an unsigned 32-bit integer, but for reasons that escaped me at the time this piece of code:

return (int)floor(b * runif());

which should return a number between 0 and b-1 inclusive, was occasionally returning b (presumably because mtEngine() can return its maximum value of \(2^{32}-1\), making runif() exactly 1.0), thereby causing the programme to address unallocated memory, and hence the crash. The reason it took so long to happen is that the random number stream had to run for a very long time before that maximum value turned up. Using the uniform_real_distribution class stopped this from happening.
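
Incidentally, the standard library has a distribution designed for exactly this job. A sketch of an alternative fix (the function name is mine) is to draw the integer directly with std::uniform_int_distribution, which samples a closed range, so the upper limit can safely be set to b - 1:

#include <random>

// Draws an integer uniformly from 0, 1, ..., b - 1; the result can never be b.
// mtEngine is the mt19937 engine from the code above.
int runifInt(int b, std::mt19937 &mtEngine){
  std::uniform_int_distribution<int> dist(0, b - 1);
  return dist(mtEngine);
}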

What about performance?

So did I take a performance hit? I cannot say unequivocally without going back to my original programme and adjusting the RNGs, but it appears that the C++ version actually takes about 10 minutes longer than the Java version. This is a very poor comparison, because the Java version is running on my Windows PC (core i7, 32GB RAM, HDD), and the C++ version is running on my MacBook (core i7, 16GB RAM, SSD), but also because the new C++ version is “more object-oriented” than the Java version. That is, firstly, I used native arrays, and arrays of arrays, in Java, like int[][] and double[][]. If I had used Java ArrayLists (why would you?), it might have been a different story. Secondly, there is a bit more OO design and architecture in the C++ version, including things like operator overloading and more extensive use of objects to represent the input data. All of these things cost, and they are probably costing a little too much in execution time, although they pay off in readability and usability, especially in well designed IDEs like Visual Studio and Xcode. Finally, my original C++ programme never finished, so I have no idea how long it would actually take to do the same set of simulations. I think in this case I will take a programme that works in three hours over a programme that quickly crashes in one.
