Hottest Forum Q&A on CodeGuru - February 2nd - 2004

Introduction:

Lots of hot topics are covered in the Discussion Forums on CodeGuru. If you missed the forums this week, you missed some interesting ways to solve a problem. Some of the hot topics this week include:

  • How do I handle cross-referencing includes?
  • Is this a costly operation?
  • What should the iterator functions begin() and end() return?
  • What is the most important factor in a certain algorithm?

How do I handle cross-referencing includes?

Thread:

mankeyrabbit needs to know whether it is possible to cross-reference includes. Is it?

This may sound like a noob question, but say I have two files, a.h
and b.h. a.h contains a class, a, and b.h contains a class, b.
Class a has a member of type b, and vice versa for b. How would I
arrange the include statements so that it compiles correctly?

So, do you know whether this is possible or not?

Well, you cannot do this directly. One solution is to use a third header file, for example defs.h, that has forward declarations for the classes. However, that presupposes that you are using pointers to the classes. Alternatively, put a forward declaration in each file. For details, take a look at the VC++ FAQ.
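For illustration, here is a minimal sketch of the forward-declaration approach (the include guards and member names are assumptions). Note that the two classes cannot contain each other *by value*; each must hold a pointer (or reference) to the other, because a complete object of each type would otherwise have to embed a complete object of the other:

// a.h
#ifndef A_H
#define A_H

class b;           // forward declaration instead of #include "b.h"

class a
{
    b* m_b;        // must be a pointer (or reference); a member of
                   // type b would require b's complete definition
};

#endif

// b.h
#ifndef B_H
#define B_H

class a;           // forward declaration instead of #include "a.h"

class b
{
    a* m_a;
};

#endif

Any source file that actually uses the members through these pointers (for example, a.cpp) then simply includes both headers.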


Is this a costly operation?

Thread:

the one wants to know whether dereferencing a pointer to a large array is a costly operation or not.

There is a function like:
MyFunction( T* apElements, int anCount );
As can be seen, it accepts an array as an argument.
In my code, I have a pointer to an array ( MyObject** ), and I need
to pass this array to MyFunction.

If I write the following in my code, does it become a costly
operation? I mean, I dereference 'a pointer to an array' to get
'an array'; and the size of the array is very large (e.g., 100,000).
In this case, how does 'dereferencing' behave? Does it dereference
all the thousands of objects, or something else?
void CMain::OnOk()
{
    MyObject** pMyObjs;
    // create it...
    // initialize them...
    //...
    MyFunction( *pMyObjs, 100000 );    // is this dereferencing
                                       // costly?
}

No. T* as a type is just a pointer to T; it is not an array of T. So, you have a T**, which is a pointer to a pointer to T. Dereferencing it yields only a pointer to T. The dereference operates on the pointer itself, not on the elements behind it; hence, it is quite fast.
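To see why, here is a minimal, self-contained sketch (the struct layout, main(), and the allocation are assumptions for demonstration; the names mirror the question). The dereference copies a single pointer value, no matter how many elements sit behind it:

#include <cstdio>

struct MyObject { int value; };

// Receives a plain pointer to the first element; only the pointer
// itself is passed, regardless of the element count.
void MyFunction( MyObject* apElements, int anCount )
{
    std::printf( "first of %d elements: %d\n", anCount, apElements[0].value );
}

int main()
{
    MyObject* array = new MyObject[100000]();   // the large array
    MyObject** pMyObjs = &array;                // pointer to the array pointer

    // *pMyObjs reads one pointer value -- an O(1) operation that never
    // touches the 100,000 MyObject instances behind it.
    MyFunction( *pMyObjs, 100000 );

    delete[] array;
    return 0;
}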


What should the iterator functions begin() and end() return?

Thread:

bluesource has implemented his own stack. Now, he needs to know what his iterator functions begin() and end() should return.

I've been developing my own stack implementation. data is my main
storage mechanism:
template<class T> class Stack
{
private:
    T *data;
public:
    typedef T* iter;
};
I've supplied all the functionality I need for the stack except for
a class iterator to loop through the stack's values. I'm looking to
do something like this:
Stack<int> s;
s.push(1);
s.push(2);
s.push(4);
for(Stack<int>::iter it=s.begin();it!=s.end();it++)
     cout << *it << endl;
What should I have the begin() and end() functions return? An
iterator is basically a pointer to the stack data, right? I'm stuck.
inline iter begin()
{
    iter front= ...    //????
    return front;
}
Am I on the right track?

Yes, you are on the right track. The iterator can be implemented as a pointer. begin() should point to the first item in the stack, and end() points to the item *after* the last available item. So, you need a dummy end value so that

it != s.end();

works correctly.

Here is how the functions could be implemented, assuming that data points one past the most recently pushed element (if data instead points to the first pushed element, begin() would return data and end() would return data + size):

inline iter begin()
{
    // bottom of the stack: size elements below the top
    return data - size;
}

inline iter end()
{
    // one past the most recently pushed element -- the dummy end value
    return data;
}

Besides that, take a look at YvesM's article, Custom iterator class.
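As a rough illustration, here is a minimal, self-contained sketch of such a stack (the growth policy, initial capacity, and omission of copy control are simplifying assumptions). This version keeps data pointing at the *bottom* of the storage, so begin() returns data and end() returns data + size:

#include <cstddef>
#include <iostream>

template<class T> class Stack
{
private:
    T*          data;        // bottom of the storage
    std::size_t size;        // number of pushed elements
    std::size_t capacity;
public:
    typedef T* iter;

    Stack() : data(new T[8]), size(0), capacity(8) {}
    ~Stack() { delete[] data; }

    void push(const T& value)
    {
        if (size == capacity)                 // grow when full
        {
            T* bigger = new T[capacity * 2];
            for (std::size_t i = 0; i < size; ++i)
                bigger[i] = data[i];
            delete[] data;
            data = bigger;
            capacity *= 2;
        }
        data[size++] = value;
    }

    iter begin() { return data; }             // first pushed element
    iter end()   { return data + size; }      // one past the last element
};

int main()
{
    Stack<int> s;
    s.push(1);
    s.push(2);
    s.push(4);
    for (Stack<int>::iter it = s.begin(); it != s.end(); ++it)
        std::cout << *it << std::endl;        // prints 1, 2, 4
    return 0;
}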


What is the most important factor in a certain algorithm?

Thread:

Charleston has read many posts and searched a lot, but could not work out what the most important factor in an algorithm is.

I have read many posts I found searching these forums about
algorithms, and had a look at some materials about how to analyze
some simple algorithms. However, I still do not understand how I can
determine the most important factor in a certain algorithm,
especially in a case where there are many loops and many different
computations in those loops. How will I know which one I have to
choose among them, then?

Can anyone please help me?

Each algorithm has its own time complexity (which is the most important characteristic of an algorithm). Unfortunately, what usually happens is that faster algorithms are harder to implement. Of course, your selection may depend on several things.

For example, if you want to sort an array of 100 numbers, you don't have to use the fastest algorithm, because even a slow O(n^2) algorithm executes in a few milliseconds; at n = 100, O(n^2) means on the order of 10,000 basic operations. On the other hand, if your array holds some millions of elements, n^2 climbs to around 10^14 operations while n log n stays near a few hundred million, so there you probably should use the faster algorithm (e.g., O(n log n)).

As I said, sometimes faster algorithms are harder to implement. For example, recursive algorithms are usually slower but easier to implement. So, generally, you have to balance time complexity (and perhaps memory complexity) against implementation difficulty. Read the whole thread to see what others have to say on the topic.
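To make the trade-off concrete, here is a minimal, self-contained sketch (the array size and the use of std::rand and std::clock are demonstration choices) that pits a hand-written O(n^2) insertion sort against the standard library's O(n log n) std::sort:

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <vector>

// Simple O(n^2) insertion sort: short and easy to write.
void insertionSort(std::vector<int>& v)
{
    for (std::size_t i = 1; i < v.size(); ++i)
    {
        int key = v[i];
        std::size_t j = i;
        while (j > 0 && v[j - 1] > key)      // shift larger elements right
        {
            v[j] = v[j - 1];
            --j;
        }
        v[j] = key;
    }
}

int main()
{
    const std::size_t n = 100000;            // at n = 100 both are instant
    std::vector<int> a(n), b;
    for (std::size_t i = 0; i < n; ++i)
        a[i] = std::rand();
    b = a;                                   // same input for both sorts

    std::clock_t t0 = std::clock();
    insertionSort(a);                        // O(n^2)
    std::clock_t t1 = std::clock();
    std::sort(b.begin(), b.end());           // O(n log n)
    std::clock_t t2 = std::clock();

    std::printf("insertion sort: %.0f ms, std::sort: %.0f ms\n",
                1000.0 * (t1 - t0) / CLOCKS_PER_SEC,
                1000.0 * (t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}

On typical hardware, the O(n^2) sort takes seconds at this size while std::sort finishes in milliseconds.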




About the Author

Sonu Kapoor

Sonu Kapoor is an ASP.NET MVP and MCAD. He is the owner of the popular .NET website http://dotnetslackers.com. DotNetSlackers publishes the latest .NET news and articles; it contains forums and blogs as well. His blog can be found at http://dotnetslackers.com/community/blogs/sonukapoor/
