Ok, I'm a quick learner, so I don't really have a programming-related question. I was reading through the GI Course book 1 and figured I'd write a little program that would tell me the size of each variable type independently of the platform I'm working on. However, I got to this point, which does not make sense:
[img]http://img22.imageshack.us/img22/7351/wtfqju.jpg[/img]
Also, the code is messy because I couldn't be bothered to write functions for it; I haven't learned functions yet, apart from a quick read on what they do:
[code]// Simple program to list the size of a variable type on the specific machine the code is compiled on
#include <iostream>
#include <cmath>
#include <limits>
using namespace std;
int main()
{
    typedef std::numeric_limits<float> fl;
    typedef std::numeric_limits<double> dl;
    typedef std::numeric_limits<long double> ldl;

    cout << "Size of char: \n"
         << "- " << sizeof(char) << " byte(s).\n"
         << "- " << sizeof(char)*8 << " bit(s).\n"
         << "- " << pow((double)2, (double)(sizeof(char)*8)) << " possible values.\n\n";

    cout << "Size of signed short: \n"
         << "- " << sizeof(signed short) << " byte(s).\n"
         << "- " << sizeof(signed short)*8 << " bit(s).\n"
         << "- " << pow((double)2, (double)(sizeof(signed short)*8)) << " possible values.\n"
         << " " << "(ranging from -" << ((pow((double)2, (double)(sizeof(signed short)*8)))/2) << " to " << ((pow((double)2, (double)(sizeof(signed short)*8)))/2)-1 << ")\n\n";

    cout << "Size of unsigned short: \n"
         << "- " << sizeof(unsigned short) << " byte(s).\n"
         << "- " << sizeof(unsigned short)*8 << " bit(s).\n"
         << "- " << pow((double)2, (double)(sizeof(unsigned short)*8)) << " possible values.\n"
         << " " << "(ranging from 0 to " << pow((double)2, (double)(sizeof(unsigned short)*8))-1 << ")\n\n";

    cout << "Size of signed int: \n"
         << "- " << sizeof(signed int) << " byte(s).\n"
         << "- " << sizeof(signed int)*8 << " bit(s).\n"
         << "- " << pow((double)2, (double)(sizeof(signed int)*8)) << " possible values.\n"
         << " " << "(ranging from -" << ((pow((double)2, (double)(sizeof(signed int)*8)))/2) << " to (" << ((pow((double)2, (double)(sizeof(signed int)*8)))/2)-1 << ")-1)\n\n";

    cout << "Size of unsigned int: \n"
         << "- " << sizeof(unsigned int) << " byte(s).\n"
         << "- " << sizeof(unsigned int)*8 << " bit(s).\n"
         << "- " << pow((double)2, (double)(sizeof(unsigned int)*8)) << " possible values.\n"
         << " " << "(ranging from 0 to (" << pow((double)2, (double)(sizeof(unsigned int)*8)) << ")-1)\n\n";

    cout << "Size of signed long: \n"
         << "- " << sizeof(signed long) << " byte(s).\n"
         << "- " << sizeof(signed long)*8 << " bit(s).\n"
         << "- " << pow((double)2, (double)(sizeof(signed long)*8)) << " possible values.\n"
         << " " << "(ranging from -" << ((pow((double)2, (double)(sizeof(signed long)*8)))/2) << " to (" << ((pow((double)2, (double)(sizeof(signed long)*8)))/2)-1 << ")-1)\n\n";

    cout << "Size of unsigned long: \n"
         << "- " << sizeof(unsigned long) << " byte(s).\n"
         << "- " << sizeof(unsigned long)*8 << " bit(s).\n"
         << "- " << pow((double)2, (double)(sizeof(unsigned long)*8)) << " possible values.\n"
         << " " << "(ranging from 0 to (" << pow((double)2, (double)(sizeof(unsigned long)*8)) << ")-1)\n\n";

    cout << "Size of signed long long: \n"
         << "- " << sizeof(signed long long) << " byte(s).\n"
         << "- " << sizeof(signed long long)*8 << " bit(s).\n"
         << "- " << pow((double)2, (double)(sizeof(signed long long)*8)) << " possible values.\n"
         << " " << "(ranging from -" << ((pow((double)2, (double)(sizeof(signed long long)*8)))/2) << " to (" << ((pow((double)2, (double)(sizeof(signed long long)*8)))/2)-1 << ")-1)\n\n";

    cout << "Size of unsigned long long: \n"
         << "- " << sizeof(unsigned long long) << " byte(s).\n"
         << "- " << sizeof(unsigned long long)*8 << " bit(s).\n"
         << "- " << pow((double)2, (double)(sizeof(unsigned long long)*8)) << " possible values.\n"
         << " " << "(ranging from 0 to (" << pow((double)2, (double)(sizeof(unsigned long long)*8)) << ")-1)\n\n";

    cout << "Size of float (Floating Point Number): \n"
         << "- " << sizeof(float) << " byte(s)\n"
         << "- " << sizeof(float)*8 << " bit(s).\n"
         << "- " << (sizeof(float)*8) - (fl::digits + fl::digits10) << " unused bit(s).\n"
         << "---- " << fl::digits << " bit(s) before the comma.\n"
         << "---- " << fl::digits10 << " bit(s) after the comma.\n\n";

    cout << "Size of double (Floating Point Number): \n"
         << "- " << sizeof(double) << " byte(s)\n"
         << "- " << sizeof(double)*8 << " bit(s).\n"
         << "- " << ((int)((sizeof(double)*8) - (dl::digits + dl::digits10)) < 0 ? -(int)((sizeof(double)*8) - (dl::digits + dl::digits10)) : (int)((sizeof(double)*8) - (dl::digits + dl::digits10))) << ((int)((sizeof(double)*8) - (dl::digits + dl::digits10)) < 0 ? " too many bit(s).\n" : " unused bit(s).\n")
         << "---- " << dl::digits << " bit(s) before the comma.\n"
         << "---- " << dl::digits10 << " bit(s) after the comma.\n\n";

    cout << "Size of long double (Floating Point Number): \n"
         << "- " << sizeof(long double) << " byte(s)\n"
         << "- " << sizeof(long double)*8 << " bit(s).\n"
         << "- " << (sizeof(long double)*8) - (ldl::digits + ldl::digits10) << " unused bit(s).\n"
         << "---- " << ldl::digits << " bit(s) before the comma.\n"
         << "---- " << ldl::digits10 << " bit(s) after the comma.\n\n";
}
[/code]
Is numeric_limits calculating something wrong, or is it me doing something wrong? I doubt it's sizeof causing it.
I just can't figure out why some types have unused bits, or too many bits compared to what they are supposed to have.
(The code in question is the last three cout blocks at the end.)
Thanks in advance.
digits is the number of base-2 digits in the mantissa (so it is a bit count, but only of the mantissa), while digits10 is a count of decimal digits. Adding the two together doesn't measure anything.
I'm not sure what this fl::digits crap is, but it's wrong. There are no unused bits in any numeric type. You don't calculate how many bits there are and what they are used for; it is just how the spec is defined. Furthermore, floats of any type don't have a "number of bits before and after the comma" (by comma, I assume you mean the radix point). The reason it is called 'floating point' is that the radix point is not in any strict position. Read up on the well-defined IEEE 754-1985 floating-point standard on Wikipedia or something.
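For what it's worth, here is a minimal sketch of what numeric_limits actually reports, assuming an IEEE 754 single-precision float (which nearly every current platform uses). Note that for integers, min()/max() give the exact range directly, with no pow() arithmetic needed:
[code]#include <iostream>
#include <limits>

int main()
{
    typedef std::numeric_limits<int>   il;
    typedef std::numeric_limits<float> fl;

    // Integer types: min()/max() are the exact representable range.
    std::cout << "int: " << sizeof(int) << " byte(s), range "
              << il::min() << " to " << il::max() << "\n";

    // float: digits is the number of base-2 digits in the mantissa
    // (24 for IEEE 754 single precision, counting the implied leading 1),
    // and digits10 is how many decimal digits survive a round trip
    // through the type. Neither is a "bits before/after the comma" count.
    std::cout << "float mantissa digits (base 2): " << fl::digits << "\n"
              << "float decimal digits: " << fl::digits10 << "\n";
}
[/code]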
Alright guys, I remembered seeing a diagram that showed floats and doubles having a specific number of bits assigned before and after the radix point. I was also getting confused because some guys on another forum said that was the way to check which bits were assigned to what.
Anyway, I'll look into that. Thanks.
In all floating-point implementations, each bit has a fixed function assigned to it. You have three fields: sign, exponent, and mantissa (or significand). The radix point for the mantissa is always at the left side of the mantissa field, and there is an implied 1 on the other side (except for denormal numbers) that doesn't actually get recorded in memory. The exponent field just tells us by how many bits to adjust that radix point to produce the actual represented magnitude. The sign is only a single bit and tells us whether the number is positive or negative.
The formula for calculating the value of a normalized floating-point number is as follows:
(-1)^sign * 1.[mantissa] * 2^(exponent - bias)
(The exponent is stored with a bias; for single precision the bias is 127.) We can only imply the 1 on the left side of the radix point because this is binary: the most significant bit of a normalized mantissa is guaranteed to be a 1.
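To make those three fields concrete, here is a small sketch that pulls them out of a float's bit pattern. It assumes a 32-bit IEEE 754 single-precision float; the value -6.25 is just an arbitrary example (in binary it is -1.1001 * 2^2):
[code]#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    float f = -6.25f;  // binary: -1.1001 * 2^2

    // Copy the raw bit pattern into an integer (assumes a 32-bit float).
    std::uint32_t bits;
    static_assert(sizeof f == sizeof bits, "expects a 32-bit float");
    std::memcpy(&bits, &f, sizeof bits);

    std::uint32_t sign     = bits >> 31;            // 1 bit
    std::uint32_t exponent = (bits >> 23) & 0xFFu;  // 8 bits, stored with bias 127
    std::uint32_t mantissa = bits & 0x7FFFFFu;      // 23 bits, implied leading 1

    std::cout << "sign = " << sign << "\n"
              << "exponent = " << (int)exponent - 127
              << " (stored as " << exponent << ")\n"
              << "mantissa bits = 0x" << std::hex << mantissa << "\n";
}
[/code]
For -6.25 this prints sign = 1, exponent = 2 (stored as 129), and mantissa bits = 0x480000, i.e. the 1001 from 1.1001 followed by 19 zero bits.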