To my readers: Alex is a fifth grader, an eleven-year-old kid halfway to becoming a Visual Basic programmer; hopefully I can motivate him to get into the Java / Oracle duo. Alex has his mind set on becoming a game programmer, and no doubt he will reach his goal!
Integer: The integers are the set of numbers consisting of the natural numbers and their negatives. They are numbers that can be written without a fractional or decimal component, and fall within the set {..., −2, −1, 0, 1, 2, ...}. For example, 65, 7, and −756 are integers; 1.6 and 1½ are not. In other words, integers are the numbers you can count with, using items such as apples or your fingers, together with their negatives and 0.
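Alex, here is how that looks in Java (the class and variable names are just mine for this little demo):

    public class IntegerDemo {
        public static void main(String[] args) {
            int apples = 65;       // whole numbers only
            int debt = -756;       // negatives work too
            int zero = 0;          // and so does zero
            // int broken = 1.6;   // this line would not compile: 1.6 is not an integer
            System.out.println(apples + debt + zero);   // prints -691
        }
    }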
Short Integer: a short integer is a data type that can represent a positive or negative whole number whose range is less than or equal to that of a standard integer on the same machine. Although there is no global standard, a short integer is commonly either exactly half the size of a standard integer or the same size as one (in the same context). In the latter case the word 'short' is technically redundant, but it may be used to indicate that the value is not a long integer.
A variable defined as a short integer in one programming language may differ in size from a similarly defined variable in another. In some languages this size is fixed across platforms, while in others it is machine-dependent. In some languages this data type does not exist at all.
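Java happens to be one of the languages where these sizes are fixed on every machine: a short is always 2 bytes and an int is always 4. A small sketch of my own:

    public class ShortDemo {
        public static void main(String[] args) {
            short small = 32767;        // the largest short Java allows (2 bytes)
            int normal = 2147483647;    // the largest int (4 bytes)
            System.out.println(small + " fits in a short; " + normal + " needs an int");
        }
    }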
Long Integer: a long integer is a data type that can represent a positive or negative whole number whose range is greater than or equal to that of a standard integer on the same machine.
In practice, a long integer usually requires double the storage of a standard integer, although this is not always the case.
A variable defined as a long integer in one programming language may differ in size from a similarly defined variable in another. In some languages this size is fixed across platforms, in others it is machine-dependent. In some languages this data type does not exist at all.
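Again using Java as our example, a long is 8 bytes and can hold values far beyond an int. This sketch shows a number that an int simply cannot hold:

    public class LongDemo {
        public static void main(String[] args) {
            int maxInt = Integer.MAX_VALUE;      // 2,147,483,647 -- the biggest int
            long bigger = maxInt + 1L;           // the L forces long arithmetic
            System.out.println(bigger);          // prints 2147483648, too big for an int
            System.out.println(Long.MAX_VALUE);  // prints 9223372036854775807
        }
    }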
Real: This is a data type used by computer programs to represent an approximation of a real number. Because the real numbers are uncountable, computers cannot represent them exactly using a finite amount of information; most often the computer uses a reasonable approximation.
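You can see the approximation at work in Java: 0.1 and 0.2 have no exact binary form, so tiny errors creep in when you add them:

    public class RealDemo {
        public static void main(String[] args) {
            System.out.println(0.1 + 0.2);   // prints 0.30000000000000004, not 0.3
        }
    }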
Blobs: A binary large object, also known as a blob, is a collection of binary data stored as a single entity in a database management system. Blobs are typically images, audio or other multimedia objects, though sometimes binary executable code is stored as a blob. Database support for blobs is not universal.
The data type was introduced to describe data that traditional computer database systems were not originally designed to hold; storing such data became practical once disk space became cheap.
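To a program, a blob is just a bag of bytes. This Java sketch loads a picture into a byte array, the same form a database driver would expect when storing it in a BLOB column ("alien.png" is a made-up file name; point it at any file on your disk):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class BlobDemo {
        public static void main(String[] args) throws IOException {
            byte[] blob = Files.readAllBytes(Path.of("alien.png"));  // hypothetical file
            System.out.println("Loaded a blob of " + blob.length + " bytes");
        }
    }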
Fixed point Arithmetic: This is also a real (decimal) data type, for numbers that have a fixed number of digits after the decimal point (and sometimes before it as well). Fixed-point values are useful for representing fractional values, usually in base 2 (binary) or base 10. See my example below for more details.
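Here is that example, as a Java sketch of my own: we keep exactly two digits after the decimal point by storing whole cents in an integer, and put the decimal point back only when printing:

    public class FixedPointDemo {
        public static void main(String[] args) {
            long priceCents = 451;   // $4.51 stored as 451 cents
            long taxCents = 36;      // $0.36
            long totalCents = priceCents + taxCents;
            System.out.printf("Total: $%d.%02d%n", totalCents / 100, totalCents % 100);
            // prints: Total: $4.87
        }
    }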
Floating point: To make it easy for you, let's say that floating point is the way the computer represents numbers that contain decimals. In other words, floating point describes a numerical representation system in which a string of digits represents a real number.
The advantage of floating-point representation over fixed point (or integer) representation is that it can support a much wider range of values. For example, a fixed-point representation that has eight decimal digits, with the decimal point assumed to be positioned after the sixth digit, can represent the numbers 123456.78, 8765.43, 123.00, and so on, whereas a floating-point representation with eight decimal digits could also represent 1.2345678, 1234567.8, 0.000012345678, 12345678000000000, and so on.
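In Java, the same double variable can hold both the huge and the tiny values from the example above, because the decimal point "floats":

    public class FloatingPointDemo {
        public static void main(String[] args) {
            double huge = 1234567.8;
            double tiny = 0.000012345678;
            System.out.println(huge);   // 1234567.8
            System.out.println(tiny);   // 1.2345678E-5 -- same digits, point moved
        }
    }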
Single Precision Floating Point: Single-precision values can contain decimal points and have a range of +/- 8.43*10^-37 to 3.40*10^38.
While Single-precision numbers can represent both enormous and microscopic values, they are limited to six digits of precision. In other words, Single-precision does a good job with figures like $451.21 and $6,411.92, but $671,421.22 cannot be represented exactly because it contains too many digits. Neither can 234.56789 or 0.00123456789. A Single-precision representation will come as close as it can in six digits: $671,421, or 234.568, or 0.00123457. Depending on your application, this rounding off can be a trivial or crippling deficiency.
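You can watch that rounding happen in Java, where float is the single-precision type (the printed digits below are what a typical Java virtual machine produces):

    public class SinglePrecisionDemo {
        public static void main(String[] args) {
            float price = 451.21f;       // six digits: no problem
            float bigger = 671421.22f;   // eight digits: too many for a float
            System.out.println(price);   // 451.21
            System.out.println(bigger);  // 671421.25 -- the .22 was lost
            System.out.println((double) 234.56789f); // 234.56788635253906, what is really stored
        }
    }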
Double Precision Floating Point: Double-precision floating-point numbers are to Single-precision numbers what Long integers are to Integers: they take twice as much space in memory (8 bytes versus 4 bytes), but have a greater range (+/- 4.19*10^-307 to 1.79*10^308) and greater accuracy (15 to 16 digits of precision versus the 6 digits of Single precision). A Double-precision, 5,000-element array requires 40,000 bytes. An Integer array with the same number of elements occupies only 10,000 bytes.
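In Java, double is the double-precision type. Printing the same digits as a float and as a double shows the difference in accuracy, and the array size matches the arithmetic above:

    public class DoublePrecisionDemo {
        public static void main(String[] args) {
            System.out.println(3.14159265358979f);  // 3.1415927 -- about 7 digits survive
            System.out.println(3.14159265358979);   // 3.14159265358979 -- all digits kept
            double[] bigArray = new double[5000];   // 5,000 x 8 bytes = 40,000 bytes
            System.out.println(bigArray.length * 8 + " bytes");
        }
    }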
Short floating-point: the representation with the smallest fixed precision provided by an implementation.
Long floating-point number: the representation with the largest fixed precision provided by an implementation.
Intermediate: between the short and long formats are two others, arbitrarily called single and double.
The precise definition of these categories is implementation-dependent. However, the rough intent is that short floating-point numbers be precise to at least four decimal places (but also have a space-efficient representation); single floating-point numbers, to at least seven decimal places; and double floating-point numbers, to at least fourteen decimal places.
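Java offers only two of these four categories: float plays the role of single and double the role of double. Printing one third with each shows roughly how many digits each keeps:

    public class PrecisionCategories {
        public static void main(String[] args) {
            System.out.println(1f / 3f);   // 0.33333334 -- about 7 digits
            System.out.println(1d / 3d);   // 0.3333333333333333 -- about 16 digits
        }
    }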
Source: various websites on the Internet, but mostly Wikipedia.