CStringFile Class

Environment: VC5/6, NT4, CE 2.11

Once upon a time I was asked to write an application which could do some serious filtering on a ';'-separated file which held multiple columns. The initial file was about 7 MB in size, so I had to create a program which could read a file, process the lines it read, and produce a new output file.
So you think: what's the big deal?
When I wrote that program, a Pentium 133 was considered a fast PC. As far as I knew, there were no generic (Microsoft-provided?) solutions for a task as simple as reading text from a file. So I built my own text-file reading class.
This brings me to the part which I found most interesting: the first version of this filtering program did the job in several seconds.
So how come the thing worked so fast? By reading the text file in an optimal manner.

The CStringFile class itself consists of 2 loops. One loop fills a read buffer, and the other loop reads a line from this buffer. The effect of these loops is that when the file is read, it is done by reading 2 KB of data per turn. Further processing (finding where any given line starts and where it stops) is done in memory, not on disk. And, as you probably know, memory is faster than disk... so there's my explanation for the speed of the filter program.
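
To make the two loops concrete, here is a minimal sketch of the technique, assuming a 2 KB block. The real class is in the download below; the class and member names here (BufferedLineReader, m_szBuffer, and so on) are my own illustration using plain C run-time calls, not the actual CStringFile internals.

#include <cstdio>
#include <string>

class BufferedLineReader
{
public:
    BufferedLineReader() : m_pFile(NULL), m_nInBuffer(0), m_nPos(0) {}
    ~BufferedLineReader() { Close(); }

    bool Open(const char* szFile)
    {
        m_pFile = fopen(szFile, "rb");
        m_nInBuffer = m_nPos = 0;
        return m_pFile != NULL;
    }

    void Close()
    {
        if (m_pFile) { fclose(m_pFile); m_pFile = NULL; }
    }

    // Loop 1 refills the 2 KB buffer from disk when it runs dry;
    // loop 2 scans the buffer in memory for the end of the line.
    bool GetNextLine(std::string& sLine)
    {
        sLine.erase();
        for (;;)
        {
            if (m_nPos >= m_nInBuffer) // buffer empty -> hit the disk once
            {
                m_nInBuffer = fread(m_szBuffer, 1, sizeof(m_szBuffer), m_pFile);
                m_nPos = 0;
                if (m_nInBuffer == 0)  // EOF: hand back any trailing text
                    return !sLine.empty();
            }
            while (m_nPos < m_nInBuffer) // scan in memory, not on disk
            {
                char c = m_szBuffer[m_nPos++];
                if (c == '\n')
                    return true;
                if (c != '\r')
                    sLine += c;
            }
        }
    }

private:
    FILE*  m_pFile;
    char   m_szBuffer[2048]; // the 2 KB block found optimal below
    size_t m_nInBuffer;      // bytes currently held in the buffer
    size_t m_nPos;           // read position inside the buffer
};

Note that GetNextLine() only touches the disk when the buffer runs dry; everything in between is a memory scan, which is where the speed comes from.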
After some fiddling around with this class I decided to post this, so that everyone can enjoy this piece of code.

This sample shows just how easily this thing works: open, read, and close. What more do you want?

#include "StringFile.h"

BOOL ReadTextFile(LPCSTR szFile)
{
    CStringFile sfText;
    CString     szLine;
    BOOL        bReturn = FALSE;

    // When the given file can be opened
    if (sfText.Open(szFile))
    {
        // Read all the lines (one by one)
        while (sfText.GetNextLine(szLine) != 0)
        {
            printf("%s\r\n", (LPCTSTR)szLine); // And print them (the cast keeps the varargs call safe)
        }
        sfText.Close();  // Close the opened file
        bReturn = TRUE;  // And say we're done successfully
    }
    return bReturn;
}

Some benchmarking (trying to find the optimum block size for reading) gave me the following results:

[Figure: Blocksize Benchmark]

This shows that the optimum size for this piece of code lies around a 2 KB block size. Increasing the block size doesn't speed up reading any further; the only thing that could still speed up the read actions is probably improving the code itself.
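
If you want to repeat the measurement, a rough harness like the one below is enough. It assumes the ReadTextFile() sample from above; GetTickCount() only has millisecond resolution, which is coarse but sufficient for a multi-megabyte file.

#include <windows.h>
#include <stdio.h>

// Times one complete read of the file using the sample function above.
void Benchmark(LPCSTR szFile)
{
    DWORD dwStart = GetTickCount(); // millisecond tick count
    ReadTextFile(szFile);
    printf("Reading took %lu ms\r\n", GetTickCount() - dwStart);
}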

Downloads

Download source - 3 Kb

