Fast C++ CSV Parser

This is a small, easy-to-use and fast header-only library for reading comma-separated value (CSV) files.

Features

Getting Started

The following small example should contain most of the syntax you need to use the library.

#include "csv.h"

int main(){
  io::CSVReader<3> in("ram.csv");
  in.read_header(io::ignore_extra_column, "vendor", "size", "speed");
  std::string vendor; int size; double speed;
  while(in.read_row(vendor, size, speed)){
    // do stuff with the data
  }
}

Installation

The library only needs a standards-conformant C++11 compiler. It has no further dependencies. The library is completely contained inside a single header file and therefore it is sufficient to copy this file to some place on your include path. The library does not have to be explicitly built.

Note, however, that threads are used and some compilers (for example GCC) require you to link against additional libraries to make this work. With GCC it is important to add -lpthread as the last item when linking, i.e., the order in

g++ -std=c++0x a.o b.o -o prog -lpthread

is important. If you for some reason do not want to use threads you can define CSV_IO_NO_THREAD before including the header.

Remember that the library makes use of C++11 features and therefore you have to enable support for them (e.g., add -std=c++0x or -std=gnu++0x).

The library was developed and tested with GCC 4.6.1.

Documentation

The library provides two classes:

LineReader: a class to efficiently read large files line by line.
CSVReader: a class that efficiently reads large CSV files.

Note that everything is contained in the io namespace.

LineReader

class LineReader{
public:
  // Constructors
  LineReader(some_string_type file_name);
  LineReader(some_string_type file_name, std::FILE*source);
  LineReader(some_string_type file_name, std::istream&source);
  LineReader(some_string_type file_name, std::unique_ptr<ByteSourceBase>source);

  // Reading
  char*next_line();

  // File Location
  // (These only affect the content of the error message)
  void set_file_line(unsigned);
  unsigned get_file_line()const;
  void set_file_name(some_string_type file_name);
  const char*get_truncated_file_name()const;
};

The constructor takes a file name and optionally a data source. If no data source is provided, the constructor tries to open the file with the given name and throws an error::can_not_open_file exception on failure. If a data source is provided, then the file name is only used to format error messages. In that case you can essentially put any string there; using a string that describes the data source results in more informative error messages.
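As an illustration (the file name and stream are made up for this sketch), the two most common forms look like this:

#include <fstream>
#include "csv.h"

int main(){
  // Open the file by name; throws error::can_not_open_file on failure.
  io::LineReader from_file("data.txt");

  // Use an already open stream; the first argument only labels error messages.
  std::ifstream stream("data.txt", std::ios::binary);
  io::LineReader from_stream("data.txt (via std::ifstream)", stream);
}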

some_string_type can be a std::string or a char*. If the data source is a std::FILE* then the library will take care of calling std::fclose. If it is a std::istream then the stream is not closed by the library. For best performance, open the streams in binary mode; however, using text mode also works. ByteSourceBase provides an interface that you can use to implement further data sources.

class ByteSourceBase{
public:
  virtual int read(char*buffer, int size)=0;
  virtual ~ByteSourceBase(){}
};

The read function should fill the provided buffer with at most size bytes from the data source. It should return the number of bytes actually written to the buffer. If the data source has run out of bytes (because, for example, the end of the file was reached) then the function should return 0. If a fatal error occurs then you can throw an exception. Note that the function can be called both from the main and from the worker thread. However, it is guaranteed that they do not call the function at the same time.
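As an illustration, here is a minimal sketch of a byte source that serves data from an in-memory string (the class name StringByteSource and the sample data are made up):

#include <algorithm>
#include <cstring>
#include <memory>
#include <string>
#include "csv.h"

// Hypothetical example: a byte source backed by a std::string.
class StringByteSource : public io::ByteSourceBase{
public:
  explicit StringByteSource(std::string d):data(d), pos(0){}

  int read(char*buffer, int size){
    // Copy at most `size` bytes and report how many were actually written;
    // returning 0 signals that the data has run out.
    int n = (int)std::min<std::size_t>(size, data.size()-pos);
    std::memcpy(buffer, data.data()+pos, n);
    pos += n;
    return n;
  }
private:
  std::string data;
  std::size_t pos;
};

int main(){
  io::LineReader in("in-memory data",
    std::unique_ptr<io::ByteSourceBase>(new StringByteSource("a,b,c\n1,2,3\n")));
  while(char*line = in.next_line()){
    // each call yields one line of the in-memory data
  }
}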

Lines are read by calling the next_line function. It returns a pointer to a null terminated C-string that contains the line. If the end of file is reached a null pointer is returned. The newline character is not included in the string. You may modify the string as long as you do not write past the null terminator. The string stays valid until the destructor is called or until next_line is called again. Windows and *nix newlines are handled transparently. UTF-8 BOMs are automatically ignored and missing newlines at the end of the file are no problem.

Important: There is a limit of 2^24-1 characters per line. If this limit is exceeded, an error::line_length_limit_exceeded exception is thrown.

Looping over all the lines in a file can be done in the following way.

LineReader in(...);
while(char*line = in.next_line()){
  ...
}

The remaining functions are mainly used to format error messages. The file line indicates the current position in the file, i.e., after the first next_line call it is 1 and after the second 2. Before the first call it is 0. The file name is truncated, as internally C-strings are used to avoid std::bad_alloc exceptions during error reporting.
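As a sketch of how this location information can be used in your own messages (the file name log.txt and the emptiness check are made up):

#include <iostream>
#include "csv.h"

int main(){
  io::LineReader in("log.txt");
  while(char*line = in.next_line()){
    if(*line == '\0') // hypothetical check: warn about empty lines
      std::cerr << in.get_truncated_file_name() << ":"
                << in.get_file_line() << ": empty line" << std::endl;
  }
}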

Note: It is not possible to exchange the line termination character.

CSVReader

CSVReader uses policies. These are classes with only static members to allow core functionality to be exchanged in an efficient way.

template<
  unsigned column_count,
  class trim_policy = trim_chars<' ', '\t'>, 
  class quote_policy = no_quote_escape<','>,
  class overflow_policy = throw_on_overflow,
  class comment_policy = no_comment
>
class CSVReader{
public:
  // Constructors
  // same as for LineReader

  // Parsing Header
  void read_header(ignore_column ignore_policy, some_string_type col_name1, some_string_type col_name2, ...);
  void set_header(some_string_type col_name1, some_string_type col_name2, ...);
  bool has_column(some_string_type col_name)const;

  // Read
  char*next_line();
  bool read_row(ColType1&col1, ColType2&col2, ...);

  // File Location 
  void set_file_line(unsigned);
  unsigned get_file_line()const;
  void set_file_name(some_string_type file_name);
  const char*get_truncated_file_name()const;
};

The column_count template parameter indicates how many columns you want to read from the CSV file. It does not have to coincide with the actual number of columns in the file. The remaining template parameters are policies that govern various aspects of the parsing.

The trim policy indicates which characters should be ignored at the beginning and the end of every column. The default ignores spaces and tabs. This makes sure that

a,b,c
1,2,3

is interpreted in the same way as

  a, b,   c
1  , 2,   3

The trim_chars can take any number of template parameters. For example trim_chars<' ', '\t', '_'> is also valid. If no character should be trimmed use trim_chars<>.
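For example, a sketch of a reader that performs no trimming at all (the file and column names are made up):

#include <string>
#include "csv.h"

int main(){
  // Keep leading and trailing whitespace by trimming no characters at all.
  io::CSVReader<2, io::trim_chars<> > in("data.csv");
  in.read_header(io::ignore_extra_column, "key", "value");
  std::string key, value;
  while(in.read_row(key, value)){
    // key and value keep any surrounding spaces and tabs
  }
}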

The quote policy indicates how strings should be quoted and escaped. It also specifies the column separator. The predefined policies are:

no_quote_escape<sep>: strings are neither quoted nor escaped; sep is used as the column separator.
double_quote_escape<sep, quote>: strings may be surrounded by the quote character, which allows them to contain the separator; a quote character inside a quoted string is escaped by doubling it; sep is used as the column separator.

Important: When combining trimming and quoting, the rows are first trimmed and then unquoted. A consequence is that spaces inside the quotes will be preserved. If you want to get rid of spaces inside the quotes, you need to remove them yourself.

Important: Quoting can be quite expensive. Disable it if you do not need it.

Important: Quoted strings may not contain unescaped newlines. This is currently not supported.

The overflow policy indicates what should be done if the integers in the input are too large to fit into the variables. The following policies are predefined:

throw_on_overflow: the default; an exception is thrown if a value does not fit.
ignore_overflow: overflows are silently ignored and the value wraps around.
set_to_max_on_overflow: a value that is too large is set to the maximum representable value (and a value that is too small to the minimum).

The comment policy allows lines to be skipped based on some criterion. The predefined policies are:

no_comment: the default; no line is treated as a comment.
empty_line_comment: lines that are empty or consist only of whitespace are skipped.
single_line_comment<char1, char2, ...>: lines that start with one of the given characters are skipped.
single_and_empty_line_comment<char1, char2, ...>: the combination of the two policies above.

Examples:
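The following sketch shows how the policies from above might be combined (the file names and the concrete policy choices are only illustrative):

#include "csv.h"

int main(){
  // Semicolon-separated file where fields may be wrapped in double quotes
  // and lines starting with '#' are treated as comments.
  io::CSVReader<3,
    io::trim_chars<' ', '\t'>,
    io::double_quote_escape<';', '\"'>,
    io::throw_on_overflow,
    io::single_line_comment<'#'>
  > quoted_in("quoted.csv");

  // Plain comma-separated file; empty lines are skipped and integer
  // overflows are silently ignored.
  io::CSVReader<3,
    io::trim_chars<' ', '\t'>,
    io::no_quote_escape<','>,
    io::ignore_overflow,
    io::empty_line_comment
  > plain_in("plain.csv");
}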

The constructors and the file location functions are exactly the same as for LineReader. See its documentation for details.

There are three methods that deal with headers. The read_header method reads a line from the file and rearranges the columns to match the order of the column names given in the program. It also checks whether all necessary columns are present. The set_header method does not read any input; use it if the file does not have a header. Obviously it is then impossible to rearrange columns or to check for their availability, so the order in the file and in the program must match when using set_header. The has_column method checks whether a column is present in the file.

The first argument of read_header is a bit field that determines how the function should react to column mismatches. The default behavior is to throw an error::extra_column_in_header exception if the file contains more columns than expected and an error::missing_column_in_header exception when there are not enough. This behavior can be altered using the following flags:

ignore_no_column: report every mismatch with an exception, i.e., the behavior described above.
ignore_extra_column: columns in the file that were not asked for are silently ignored.
ignore_missing_column: requested columns that are missing in the file are ignored; the corresponding variables passed to read_row are left untouched.

The flags can be combined using the bitwise or operator |.

When using ignore_missing_column it is a good idea to initialize the variables passed to read_row with a default value, for example:

// The file only contains column "a"
io::CSVReader<2> in(...);
in.read_header(io::ignore_missing_column, "a", "b");
int a,b = 42;
while(in.read_row(a,b)){
  // a contains the value from the file
  // b is left unchanged by read_row, i.e., it is 42
}

If only some columns are optional or their default value depends on other columns you have to use has_column, for example:

// The file only contains the columns "a" and "b"
io::CSVReader<3> in(...);
in.read_header(io::ignore_missing_column, "a", "b", "sum");
if(!in.has_column("a") || !in.has_column("b"))
  throw my_neat_error_class();
bool has_sum = in.has_column("sum");
int a,b,sum;
while(in.read_row(a,b,sum)){
  if(!has_sum)
    sum = a+b;
}

Important: Do not call has_column from within the read loop. It would work correctly but significantly slow down processing.

If two columns have the same name, an error::duplicated_column_in_header exception is thrown. If read_header is called but the file is empty, an error::header_missing exception is thrown.

The next_line function reads a line without parsing it. It works analogously to LineReader::next_line. This can be used to skip broken lines in a CSV file. However, in nearly all applications you will want to use the read_row function.
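For instance, a sketch of discarding one raw line before resuming normal parsing (the file and column names are made up):

#include "csv.h"

int main(){
  io::CSVReader<2> in("data.csv");
  in.read_header(io::ignore_extra_column, "a", "b");
  in.next_line(); // skip one line of the file without parsing it as CSV
  int a, b;
  while(in.read_row(a, b)){
    // process the remaining rows
  }
}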

The read_row function reads a line, splits it into the columns and arranges them correctly. It trims the entries and unescapes them. If requested, the content is interpreted as an integer or as a floating-point number. The variables passed to read_row may be of the following types:

builtin signed integer types,
builtin unsigned integer types,
builtin floating-point types (float, double, long double),
char,
std::string,
char* (the pointer is set to point directly into the internal line buffer and stays valid only until the next line is read).

Note that there is no inherent overhead to reading a char* and then interpreting it yourself compared to using one of the parsers directly built into CSVReader. The builtin number parsers are pure convenience. If you need a slightly different syntax then use char* and do the parsing yourself.
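For example, a sketch of parsing a column in a custom format yourself (the file, the column names, and the hexadecimal interpretation are made up):

#include <cstdlib>
#include <string>
#include "csv.h"

int main(){
  io::CSVReader<2> in("measurements.csv");
  in.read_header(io::ignore_extra_column, "name", "value");
  std::string name; char*raw_value;
  while(in.read_row(name, raw_value)){
    // raw_value points directly into the internal buffer; parse it as needed,
    // here as a hexadecimal integer.
    long value = std::strtol(raw_value, nullptr, 16);
    // do stuff with name and value
    (void)value;
  }
}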

FAQ

Q: The library is throwing a std::system_error with code -1. How to get it to work?

A: Your compiler's std::thread implementation is broken. Define CSV_IO_NO_THREAD to disable threading support.

Q: My values are not just ints or strings. I want to parse my customized type. Is this possible?

A: Read a char* and parse the string. At first this seems expensive, but it is not, as the pointer you get points directly into the memory buffer. In fact there is no inherent reason why a custom int-parser realized this way must be any slower than the int-parser built into the library. By reading a char* the library takes care of column reordering and quote escaping and leaves the actual parsing to you. Note that using a std::string is slower as it involves a memory copy.

Q: I get lots of compiler errors when compiling the header! Please fix it. :(

A: Have you enabled the C++11 mode of your compiler? If you use GCC you have to add -std=c++0x to the command line. If this does not resolve the problem, then please open a ticket.

Q: The library crashes when parsing large files! Please fix it. :(

A: When using GCC, have you linked against -lpthread? Read the installation section for details on how to do this. If this does not resolve the issue then please open a ticket. (The reason why it crashes only on large files is that the first chunk is read synchronously; if the whole file fits into this chunk then no asynchronous call is performed.) Alternatively you can define CSV_IO_NO_THREAD.

Q: Does the library support UTF?

A: The library has basic UTF-8 support, or to be more precise it does not break when passing UTF-8 strings through it. If you read a char* then you get a pointer to the UTF-8 string. You will have to decode the string on your own. The separator, quoting, and commenting characters used by the library can only be ASCII characters.

Q: Does the library support string fields that span multiple lines?

A: No. This feature has often been requested in the past; however, it is difficult to make it work with the current design without breaking something else.

Q: Can this library handle a variable number of columns?

A: You can read a compile-time known constant number of columns from a file with a variable number of columns. Which columns will be read depends on the strings in the header line. There is no way to read a variable number of columns. You can think of the provided functionality as a SQL select col1,col2,col3 from my_file.csv statement and the CSV file as the table. You can change the number of columns in the table without affecting the result of the select as long as the queried columns remain.