CStringFile Class

Environment: VC5/6, NT4, CE 2.11

Once upon a time I was asked to write an application that could do some serious filtering on a ';'-separated file with multiple columns. The initial file was about 7 MB in size, so I had to create a program that could read a file, process the lines it read, and produce a new output file.
So you think: what's the big deal?
When I wrote that program, a Pentium 133 was considered a fast PC. As far as I knew, there were no generic (Microsoft-provided?) solutions for a task as simple as reading text from a file, so I built my own text-file reading class.
This brings me to the part I found most interesting: the first version of this filtering program did the job in several seconds.
So how come the thing worked so fast? Because of the way the file is read.

The CStringFile class itself consists of two loops. One loop fills a read buffer, and the other reads a line from this buffer. The effect of these loops is that the file is read 2 KB at a time; further processing (finding where any given line starts and where it ends) is done in memory, not on disk. And, as you probably know, memory is faster than disk... so there's my explanation for the speed of the filter program.
After some fiddling around with this class I decided to post it, so that everyone can enjoy this piece of code.
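To make the two-loop idea concrete, here is a minimal standard-C++ sketch of the same technique. Note that this is a hypothetical BufferedLineReader written for illustration, not the actual CStringFile source: an outer loop refills a fixed-size 2 KB buffer from disk, and an inner loop scans that buffer character by character for line ends.

```cpp
#include <cstdio>
#include <string>

// Hypothetical illustration of the buffered line-reading technique,
// not the real CStringFile implementation.
class BufferedLineReader
{
public:
    explicit BufferedLineReader(const char* path)
        : m_fp(std::fopen(path, "rb")), m_len(0), m_pos(0) {}
    ~BufferedLineReader() { if (m_fp) std::fclose(m_fp); }

    bool IsOpen() const { return m_fp != 0; }

    // Returns false once the end of the file is reached.
    bool GetNextLine(std::string& line)
    {
        line.clear();
        bool gotAny = false;
        for (;;)
        {
            if (m_pos == m_len)                 // buffer exhausted:
            {
                m_len = std::fread(m_buf, 1, sizeof m_buf, m_fp); // refill 2 KB
                m_pos = 0;
                if (m_len == 0)                 // end of file
                    return gotAny;
            }
            char c = m_buf[m_pos++];            // inner loop: scan in memory
            gotAny = true;
            if (c == '\n')                      // line complete
                return true;
            if (c != '\r')                      // drop the CR of CRLF pairs
                line += c;
        }
    }

private:
    std::FILE* m_fp;
    char       m_buf[2048];  // 2 KB block, the size the benchmark below favours
    size_t     m_len, m_pos;
};
```

The point of the design is that the expensive operation (a disk read) happens once per 2 KB block, while the cheap per-character work happens on data that is already in memory.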

This sample shows just how easily this thing works: open, read, and close. What more do you want?

#include "StringFile.h"

BOOL ReadTextFile(LPCSTR szFile)
{
 CStringFile sfText;
 CString     szLine;
 BOOL        bReturn = FALSE;

 // When the given file can be opened
 if(sfText.Open(szFile))
 {
  // Read all the lines (one by one)
  while(sfText.GetNextLine(szLine)!=0)
  {
   printf("%s\r\n",(LPCSTR)szLine); // And print them
  }
  sfText.Close(); // Close the opened file
  bReturn = TRUE; // And say we're done successfully
 }
 return bReturn;
}

Some benchmarking (trying to find the optimum block size for reading) gave me the following results:

[Figure: Blocksize Benchmark — read time versus block size]

This shows that the optimum size for this piece of code lies around a 2 KB block size. Increasing the block size doesn't speed up reading; the only way to speed up the read actions further is probably to improve the code itself.
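A benchmark like this can be reproduced with a small timing harness. The sketch below is an illustration of the approach, not the harness I used: a helper reads a whole file with a given block size, and a caller times it for a range of sizes (file name and sizes are placeholders).

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Reads the whole file in blocks of the given size; returns total bytes read.
size_t ReadWithBlockSize(const char* path, size_t blockSize)
{
    std::FILE* fp = std::fopen(path, "rb");
    if (!fp) return 0;
    std::vector<char> buf(blockSize);
    size_t total = 0, n;
    while ((n = std::fread(&buf[0], 1, blockSize, fp)) > 0)
        total += n;
    std::fclose(fp);
    return total;
}

// Example driver: time several block sizes against one large test file.
void BenchmarkBlockSizes(const char* path)
{
    const size_t sizes[] = { 512, 1024, 2048, 4096, 8192 };
    for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; ++i)
    {
        using namespace std::chrono;
        steady_clock::time_point t0 = steady_clock::now();
        size_t total = ReadWithBlockSize(path, sizes[i]);
        double ms = duration<double, std::milli>(steady_clock::now() - t0).count();
        std::printf("%5u-byte blocks: %u bytes in %.2f ms\n",
                    (unsigned)sizes[i], (unsigned)total, ms);
    }
}
```

On modern systems the OS read-ahead cache flattens the curve considerably, which is consistent with the observation above that growing the block size beyond a few KB stops paying off.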

Downloads

Download source - 3 Kb

