Optimization Tip for Adaptive Arithmetic Coding

The arithmetic coding data compression method is very well known and is used as a building block in a number of popular, complex, multistage compression techniques. This article therefore assumes that the reader is familiar with the basic concepts of arithmetic coding.

This article describes an optimization I found for the most commonly used adaptive Witten-Neal-Cleary implementation (also known as "the finite-precision algorithm"). For details (including source code samples), you can refer to Mark Nelson's article on arithmetic coding. Below, I use the common terminology that article follows.

In the practical implementations I've seen, the cumulative symbol frequency table (a component of the algorithm) is represented by an array of integers containing, for each symbol, the sum of the frequencies of all symbols with smaller indices. The array is ordered in some arbitrary fashion (for instance, by the "values" of the symbols) or by the (non-cumulative) symbol frequencies. Encoding a symbol requires determining its frequency interval; decoding requires finding the frequency interval that contains a given point. Both actions are followed by updating the adaptive model, which is commonly done by increasing the symbol's non-cumulative frequency, that is, by incrementing the cumulative frequencies of all symbols whose indices are not less than that of the symbol just encoded/decoded. The first operation is fast, and the second can be done quickly enough by binary search in the array (which is monotone; I've noticed this is not done in the sample referenced above), but the third requires a considerable number of increments. Sorting the array by frequency pays off only for small arrays (a small total number of symbols) with an EXTREMELY non-uniform distribution of symbols.
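To make the discussion concrete, here is a minimal sketch of that conventional flat representation (the class and member names are mine, purely illustrative, and not taken from any of the referenced sources):

#include <algorithm>
#include <vector>

typedef unsigned long DWORD;   // as defined in <windows.h>

// Sketch of the conventional flat representation (illustrative
// names, not the attached source). Q[i] holds the sum of the
// frequencies of all symbols with indices less than i.
class CNaiveFreqMod
{
public:
    std::vector<DWORD> Q;      // nSymbols + 1 entries, Q[0] = 0

    explicit CNaiveFreqMod(DWORD nSymbols) : Q(nSymbols + 1)
    {
        for (DWORD i = 0; i <= nSymbols; ++i)
            Q[i] = i;          // every frequency starts at 1
    }

    // Encoding: the interval of symbol i is (Q[i], Q[i + 1]) -- O(1).
    void GetInterval(DWORD i, DWORD &dwLeft, DWORD &dwRight) const
    {
        dwLeft  = Q[i];
        dwRight = Q[i + 1];
    }

    // Decoding: binary search in the (monotone) array -- O(log n);
    // requires 0 <= dwPoint < Q[nSymbols].
    DWORD FindByPoint(DWORD dwPoint) const
    {
        std::vector<DWORD>::const_iterator it =
            std::upper_bound(Q.begin() + 1, Q.end(), dwPoint);
        return DWORD(it - Q.begin()) - 1;
    }

    // Updating the model: every cumulative frequency above index i
    // must be incremented -- O(n) per symbol, the bottleneck.
    void Update(DWORD i, DWORD dwIncrement)
    {
        for (size_t k = i + 1; k < Q.size(); ++k)
            Q[k] += dwIncrement;
    }
};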

So I suggest using a tree-like representation of the frequency information, stored in an array. Assume we have 8 symbols in total, 0, 1, ..., 7, with cumulative frequencies Q[0] = 0, Q[1], ..., Q[7], Q[8] = (the sum of all non-cumulative frequencies). We can break the whole frequency interval (0, Q[8]) at Q[4] into two subintervals (of lengths Q[4] and Q[8] - Q[4]), then do the same with the intervals obtained (using Q[2] and Q[6]), and so on. We thus obtain a binary tree of "breaking points", each node of which represents the breaking of an interval in two. Let each node contain the breaking location counted from the lower bound of its interval. If we now store the resulting tree in an array, using the indices of the breaking points (as Q[] elements) in the Q[] array, we obtain the following array A[]:
A[0] = Q[0] = 0, A[1] = Q[1], A[2] = Q[2], A[3] = Q[3] - Q[2], A[4] = Q[4], A[5] = Q[5] - Q[4], A[6] = Q[6] - Q[4], A[7] = Q[7] - Q[6] (and A[8] = Q[8]).
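As a concrete illustration (with made-up numbers): if the non-cumulative frequencies of symbols 0, 1, ..., 7 are 3, 1, 4, 1, 5, 9, 2, 6, then Q[] = 0, 3, 4, 8, 9, 14, 23, 25, 31, and the tree array is A[] = 0, 3, 4, 4, 9, 5, 14, 2 (, 31).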
Now, obtaining the interval bounds and updating the model while encoding/decoding can be done in a single traversal. To pass a given symbol while encoding, binary-search its index in the "0, 1, ..., 7, 8" array (first comparing with "8", then "4", and so on), incrementing A[k] whenever you go left at node "k". To pass a given point while decoding, binary-search it in A[], subtracting A[k] from the point when going right at node "k" (and incrementing A[k] when going left). The initial state of A[] is not hard to determine (in our case, it is "0, 1, 2, 1, 4, 1, 2, 1, 8"), and overflow prevention (halving the frequencies) is also straightforward. Q[8] can be stored in A[0] instead of A[8]. I'm attaching a short piece of code containing the 'CFreqModIncr' class implementing all this functionality (sorry, I didn't have time to make up a complete project demonstrating it :) maybe someone will do it "for me and others"? ;)
// *** A short sample.

// Construct the model.
CFreqModIncr model;

// Initialize it to hold 11 symbols and
// set up the increment.
model.Initialize(11);
model.dwIncrement = 1;

// To pass a symbol with index 'dwIndex'
// while encoding, use:
DWORD dwLeft, dwRight, dwTotal = model.GetTotalFrequency();
model.PassByIndex(dwIndex, dwLeft, dwRight);
// ... and encode the (dwLeft/dwTotal,
// dwRight/dwTotal) interval.

// To decode a symbol, obtain a point
// 'dwPoint' in frequency units, then use:
dwTotal = model.GetTotalFrequency();
DWORD dwIndex = model.PassByPoint(dwPoint, dwLeft, dwRight);
// dwIndex = index of the decoded symbol;
// dwLeft, dwRight, and dwTotal are used in
// the rest of the decoding process.

model.Uninitialize();
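The internals of 'CFreqModIncr' are in the attached source. Purely as an illustration, here is my own sketch of how the two passes over the tree array can be implemented; it is not the attached code, and for simplicity it assumes the number of symbols is a power of two (the attached class itself accepts arbitrary counts, such as the 11 above):

#include <vector>

typedef unsigned long DWORD;   // as defined in <windows.h>

// Illustrative sketch only; the attached class may differ.
// The number of symbols must be a power of two here.
class CFreqModSketch
{
    std::vector<DWORD> m_A;    // m_A[0] = total, m_A[1..n-1] = breaking points
    DWORD m_nSymbols;
public:
    DWORD dwIncrement;

    void Initialize(DWORD nSymbols)
    {
        m_nSymbols = nSymbols;
        m_A.resize(nSymbols);
        m_A[0] = nSymbols;             // all frequencies start at 1
        for (DWORD k = 1; k < nSymbols; ++k)
            m_A[k] = k & (~k + 1);     // lowest set bit of k
        dwIncrement = 1;
    }

    DWORD GetTotalFrequency() const { return m_A[0]; }

    // Encoding pass: compute the interval of symbol 'dwIndex' and
    // update the model in the same root-to-leaf traversal.
    void PassByIndex(DWORD dwIndex, DWORD &dwLeft, DWORD &dwRight)
    {
        DWORD base = 0, right = m_A[0];
        DWORD k = m_nSymbols >> 1, step = k;
        for (;;)
        {
            if (dwIndex < k)           // going left: the break at node k
            {                          // becomes the new upper bound...
                right = base + m_A[k];
                m_A[k] += dwIncrement; // ...and Q[k] grows with f[dwIndex]
            }
            else
                base += m_A[k];        // going right: break = new lower bound
            step >>= 1;
            if (step == 0)
                break;
            k = (dwIndex < k) ? k - step : k + step;
        }
        m_A[0] += dwIncrement;         // the total grows as well
        dwLeft  = base;
        dwRight = right;
    }

    // Decoding pass: find the symbol whose interval contains 'dwPoint',
    // return its index and interval, and update the model on the way.
    DWORD PassByPoint(DWORD dwPoint, DWORD &dwLeft, DWORD &dwRight)
    {
        DWORD base = 0, right = m_A[0], dwIndex = 0;
        DWORD k = m_nSymbols >> 1, step = k;
        for (;;)
        {
            bool bLeft = (dwPoint < base + m_A[k]);
            if (bLeft)
            {
                right = base + m_A[k];
                m_A[k] += dwIncrement;
            }
            else
            {
                base += m_A[k];        // point lies right of the break,
                dwIndex = k;           // so the symbol index is at least k
            }
            step >>= 1;
            if (step == 0)
                break;
            k = bLeft ? k - step : k + step;
        }
        m_A[0] += dwIncrement;
        dwLeft  = base;
        dwRight = right;
        return dwIndex;
    }
};

Each pass visits one node per tree level, so both the interval lookup and the model update cost O(log n) array operations, replacing the O(n) increments needed by the flat representation.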

Downloads

Download source - 2 KB


Comments

  • Huffman compression algorithm

    Posted by Legacy on 02/08/2003 12:00am

    Originally posted by: sinu ommen

    Hi,
    
    I need the implementation of Huffman's compression algorithm. Can you send me an explanation and its implementation in VC?

  • Patents?

    Posted by Legacy on 08/09/2002 12:00am

    Originally posted by: daran

    Question: AFAIK, arithmetic coding is covered by more
    than a dozen patents; are they expired, or did no one
    bother to mention that fact here?

    From news:comp.compression/faq:
    ------------------------
    - IBM holds many patents on arithmetic coding
    (4,122,440 4,286,256 4,295,125 4,463,342 4,467,317
    4,633,490 4,652,856 4,792,954 4,891,643 4,901,363
    4,905,297 4,933,883 4,935,882 5,045,852 5,099,440
    5,142,283 5,210,536 5,414,423 5,546,080).
    ...
    See also below details on many other patents on
    arithmetic coding (4,973,961 4,989,000 5,023,611
    5,025,258 5,272,478 5,307,062 5,309,381 5,311,177
    5,363,099 5,404,140 5,406,282 5,418,532). The list is
    not exhaustive.
    ------------------------
    So it seems arithmetic coding is only of academic
    interest until at least the majority of these have
    expired, unless you are working for IBM (TM, I
    suppose :-) or for a company with a seriously big
    budget (for either acquisition or lawyers).

    Have a nice day, otherwise,
    Daran

  • Lexics...

    Posted by Legacy on 03/20/2001 12:00am

    Originally posted by: Ivan V. Borsevici

    "EXTREMELY non-uniform distribution of symbols"
    
    

    Just high entropy text.
    Did the author even seen such text as potentially compressable?
