
Normalize integer types to float range

Tags: normalize, float, range, types, integer

Problem

I wrote two template functions to take an integer type and output a float in the given range. One function takes signed integers and the other takes unsigned integers.

The functions are used to normalize audio PCM data into a range that an encoder is expecting, such as [0.0f, 1.0f]. The source audio data can come in many forms (signed, unsigned, 1 byte, 2 bytes, etc.), so I thought template functions would be ideal for this.

The caller has the responsibility of calling the function with an appropriately sized type containing the sample to normalize (e.g. int16_t). The functions use a lookup table to determine the max/min values the type can contain, based on the size of the integer type.

util.cc

```
#include <cstdint>

const uintmax_t MAX_VALUE_PER_BYTES_UNSIGNED[] = {
    0,
    uintmax_t(UINT8_MAX),
    uintmax_t(UINT16_MAX),
    uintmax_t(UINT16_MAX) + uintmax_t(UINT8_MAX),
    uintmax_t(UINT32_MAX),
    uintmax_t(UINT32_MAX) + uintmax_t(UINT8_MAX),
    uintmax_t(UINT32_MAX) + uintmax_t(UINT16_MAX),
    uintmax_t(UINT32_MAX) + uintmax_t(UINT16_MAX) + uintmax_t(UINT8_MAX),
    uintmax_t(UINT64_MAX)
};

const intmax_t MAX_VALUE_PER_BYTES_SIGNED[] = {
    0,
    intmax_t(INT8_MAX),
    intmax_t(INT16_MAX),
    intmax_t(INT16_MAX) + intmax_t(INT8_MAX),
    intmax_t(INT32_MAX),
    intmax_t(INT32_MAX) + intmax_t(INT8_MAX),
    intmax_t(INT32_MAX) + intmax_t(INT16_MAX),
    intmax_t(INT32_MAX) + intmax_t(INT16_MAX) + intmax_t(INT8_MAX),
    intmax_t(INT64_MAX)
};

const intmax_t MIN_VALUE_PER_BYTES_SIGNED[] = {
    0,
    intmax_t(INT8_MIN),
    intmax_t(INT16_MIN),
    intmax_t(INT16_MIN) + intmax_t(INT8_MIN),
    intmax_t(INT32_MIN),
    intmax_t(INT32_MIN) + intmax_t(INT8_MIN),
    intmax_t(INT32_MIN) + intmax_t(INT16_MIN),
    intmax_t(INT32_MIN) + intmax_t(INT16_MIN) + intmax_t(INT8_MIN),
    intmax_t(INT64_MIN)
};
```


util.h

```
#include <cstdint>

extern const uintmax_t MAX_VALUE_PER_BYTES_UNSIGNED[];
extern const intmax_t MAX_VALUE_PER_BYTES_SIGNED[];
extern const intmax_t MIN_VALUE_PER_BYTES_SIGNED[];
```
Solution

`<limits>` already provides a set of limits on values that can be stored in the various types. It's also a template, so it's easy to invoke on a template parameter. As a quick demo of that particular part of things:

```
template <class T>
void showminmax(T) {
    std::cout << "min: " << std::numeric_limits<T>::min() <<
               "\nmax: " << std::numeric_limits<T>::max();
}

int main() {
     std::cout << "int\n";
     showminmax(1);

     std::cout << "\nunsigned long long\n";
     showminmax(1ULL);
}
```


I believe the code to normalize the number to the range [0..1] can be simplified quite a bit as well, to something on this order:

```
template <class T>
float normalize(T t) {
    static_assert(std::is_integral<T>::value, "Input must be integral");
    float min = std::numeric_limits<T>::min();
    float max = std::numeric_limits<T>::max();
    float range = max - min;

    return (t - min) / range;
}
```


This should work for either signed or unsigned types. Here's a quick bit of demo code to exercise it, and show the results:

```
int main() {
    std::cout << "char (min):  " << normalize(std::numeric_limits<char>::min()) << "\n";
    std::cout << "char (1/4):  " << normalize(char(std::numeric_limits<char>::min() >> 1)) << "\n";
    std::cout << "char (0):    " << normalize('\0') << "\n";
    std::cout << "char (3/4):  " << normalize(char(std::numeric_limits<char>::max() >> 1)) << "\n";
    std::cout << "char (max):  " << normalize(std::numeric_limits<char>::max()) << "\n\n";

    std::cout << "int (min):   " << normalize(std::numeric_limits<int>::min()) << "\n";
    std::cout << "int (1/4):   " << normalize(std::numeric_limits<int>::min() >> 1) << "\n";
    std::cout << "int (0):     " << normalize(0) << "\n";
    std::cout << "int (3/4):   " << normalize(std::numeric_limits<int>::max() >> 1) << "\n";
    std::cout << "int (max):   " << normalize(std::numeric_limits<int>::max()) << "\n\n";

    std::cout << "uint (0):    " << normalize(0U) << "\n";
    std::cout << "uint (1/4):  " << normalize(std::numeric_limits<unsigned int>::max() / 4) << "\n";
    std::cout << "uint (mid):  " << normalize(std::numeric_limits<unsigned int>::max() / 2) << "\n";
    std::cout << "uint (max):  " << normalize(std::numeric_limits<unsigned int>::max()) << "\n\n";

    std::cout << "ULL (0):     " << normalize(0ULL) << "\n";
    std::cout << "ULL (1/4):   " << normalize(std::numeric_limits<unsigned long long>::max() / 4) << "\n";
    std::cout << "ULL (mid):   " << normalize(std::numeric_limits<unsigned long long>::max() / 2) << "\n";
    std::cout << "ULL (max):   " << normalize(std::numeric_limits<unsigned long long>::max()) << "\n";
}
```


Note that two's complement is asymmetric, so the values produced are technically just a tiny bit off (well, off from what you might expect, anyway). This will normally be lost in the rounding for any type with more than ~24 input bits, but if the input type is smaller than that (e.g., most implementations of `char` and `short`), a signed value of `0` won't give precisely 0.5 as the result (unless you have a machine with something like signed-magnitude or ones' complement `int`, anyway).


Context

StackExchange Code Review Q#131268, answer score: 3
