The data types are defined as follows:

**L**- A logical value consists of an ASCII "T" indicating true or an ASCII "F" indicating false. A null character (zero byte) signifies an invalid value.
**X**- A bit array starts in the most significant bit of the byte,
and the subsequent bits are in the order of decreasing significance in
the byte. A bit array field in a binary table consists of an integral
number of bytes with those bits that follow the array set to zero. No
specific null value is prescribed for bit arrays, but the following
three conventions are suggested:
- Designate one bit of the array to be a validity bit.
- Add a type L field to the table to indicate the validity of the bit array.
- Add a second bit array which contains a validity bit for each of the bits of the original array.

Use of any of these conventions is a decision for an individual project or a particular group of FITS users; general FITS-reading software cannot be expected to interpret them.
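To illustrate the MSB-first bit ordering and the byte-rounding rule, here is a small Python sketch (the helper names are ours, not part of the standard):

```python
def get_bit(field_bytes: bytes, k: int) -> bool:
    """Return bit k (0-based) of a binary-table bit array.

    Bit 0 is the most significant bit of the first byte; bits within
    each byte follow in order of decreasing significance.
    """
    return bool(field_bytes[k // 8] & (0x80 >> (k % 8)))

def field_size(nbits: int) -> int:
    """A bit array field occupies an integral number of bytes;
    the pad bits that follow the array are set to zero."""
    return (nbits + 7) // 8
```

For example, the single byte `0xA0` (binary `1010 0000`) yields `True`, `False`, `True` for bits 0, 1, and 2, and a 10-bit array occupies `field_size(10) == 2` bytes.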

**B**- An unsigned 8-bit integer has the bits in decreasing order of significance. By applying scaling, this field may be used to store quantities whose physical values are signed.
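The scaling mentioned above follows the usual FITS linear transformation, physical = TZEROn + TSCALn × stored. A minimal sketch, assuming the conventional offset of -128 to map an unsigned byte onto a signed range:

```python
def physical_value(raw: int, tscal: float = 1.0, tzero: float = 0.0) -> float:
    """FITS linear scaling: physical = TZEROn + TSCALn * stored value."""
    return tzero + tscal * raw

# With TZEROn = -128 and TSCALn = 1, an unsigned 8-bit field (0..255)
# represents signed physical values in the range -128..127.
```

So a stored byte of 0 represents -128 and a stored byte of 255 represents 127.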
**I**- A 16-bit integer is a twos-complement integer with the bits in decreasing order of significance.
**J**- A 32-bit integer is a twos-complement integer with the bits in decreasing order of significance.
**A**- Character strings consist of 8-bit ASCII characters in their natural order. An ASCII NULL (hexadecimal `00`) character may be used to terminate a character string before the length specified by the repeat count is reached. Strings occupying the full length of the field are not NULL terminated. An ASCII NULL as the first character signifies a NULL (undefined) string. The only characters permitted are the printable ASCII characters (those in the range hexadecimal `20`-`7E`) and the ASCII NULL after the last valid character.
**E**- Single precision floating point values are in IEEE 32-bit precision format, as described in section 3.1.2.3.
**D**- Double precision floating point values are in IEEE 64-bit precision format, as described in section 3.1.2.3.
**C**- A complex value is composed of a pair of IEEE 32-bit floating point values: the first is the real part and the second the imaginary part.
**M**- A double precision complex value is composed of a pair of IEEE 64-bit floating point values: the first is the real part and the second the imaginary part.
**P**- An array descriptor consists of two 32-bit twos-complement integer values.
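To make the encodings concrete, here is a Python sketch that decodes one hypothetical row containing a 16-bit integer (I), a single precision float (E), and an 8-character string (8A). The row layout is our own example; FITS binary data is big-endian, hence the `>` prefix, and the `>` prefix also tells `struct` to apply no padding between fields:

```python
import struct

# Hypothetical row layout: 1I (16-bit twos-complement), 1E (IEEE 32-bit
# float), 8A (8 ASCII characters), all big-endian, no padding.
ROW_FORMAT = ">h f 8s"

def decode_row(row: bytes):
    i_val, e_val, a_raw = struct.unpack(ROW_FORMAT, row)
    # An ASCII NULL may terminate the string before the repeat count is
    # reached; characters from the first NULL onward are discarded.
    a_val = a_raw.split(b"\x00", 1)[0].decode("ascii")
    return i_val, e_val, a_val
```

A string that occupies the full field (no NULL present) is returned whole, matching the rule that full-length strings are not NULL terminated.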

For the floating point types (E, D, C, and M), IEEE NaN values represent undefined or null values; in particular, a value with all bits set is recognized as a NaN. All IEEE special values are recognized, including infinities, NaNs, and denormalized numbers. If either the real or the imaginary part of a complex value contains a NaN, the entire complex value is regarded as invalid.
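These rules can be checked directly in Python: an all-ones 32-bit pattern decodes to a NaN, and a complex value is invalid when either component is a NaN:

```python
import math
import struct

# A 32-bit value with all bits set (sign 1, exponent all ones, nonzero
# mantissa) decodes to an IEEE NaN.
all_ones = struct.unpack(">f", b"\xff\xff\xff\xff")[0]
assert math.isnan(all_ones)

def complex_is_invalid(re: float, im: float) -> bool:
    """Per the rule above: invalid if either component is a NaN."""
    return math.isnan(re) or math.isnan(im)
```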

Alignment may be a problem when binary tables are read. Suppose the sequence (1I, 1E) is accessed directly in the I/O buffer. Some machines require that the offset from the start of the buffer to a 4-byte floating point number be evenly divisible by 4; if the floating point number does not begin at such a location, a data alignment error results. Where alignment matters, the data should be copied from the I/O buffer into a properly aligned buffer before the values are interpreted.
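In C-like languages the safe pattern is to `memcpy` each field out of the I/O buffer into an aligned variable rather than casting a pointer into the buffer. Python's `struct.unpack_from` already copies the bytes out rather than reinterpreting memory in place, so it is immune to alignment faults; a sketch for the (1I, 1E) case at an arbitrary offset:

```python
import struct

def read_record(buf: bytes, offset: int):
    """Read a (1I, 1E) record starting at an arbitrary byte offset.

    unpack_from copies the bytes before decoding them, the analogue of
    copying into an aligned buffer before accessing the values.
    """
    i_val = struct.unpack_from(">h", buf, offset)[0]   # 16-bit integer
    e_val = struct.unpack_from(">f", buf, offset + 2)[0]  # 32-bit float
    return i_val, e_val
```

Note that the record here starts at an odd offset, which would fault on alignment-strict hardware if the float were accessed in place.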