In physics, physical information refers generally to the information that is contained in a physical system. Its usage in quantum mechanics (i.e., quantum information) is important, for example in the concept of quantum entanglement, where it is used to describe the effectively direct or causal relationships between apparently distinct or spatially separated particles.
Information itself may be loosely defined as "that which can distinguish one thing from another".[citation needed] The information embodied by a thing can thus be said to be the identity of the particular thing itself, that is, all of its properties, all that makes it distinct from other (real or potential) things. It is a complete description of the thing, but in a sense that is divorced from any particular language.
When clarifying the subject of information, care should be taken to distinguish between the following specific cases:
The phrase instance of information refers to the specific instantiation of information (identity, form, essence) that is associated with the being of a particular example of a thing. (This allows one to refer to separate instances of information that happen to share identical patterns.)
A holder of information is a variable or mutable instance that can have different forms at different times (or in different situations).
A piece of information is a particular fact about a thing's identity or properties, i.e., a portion of its instance.
A pattern of information (or form) is the pattern or content of an instance or piece of information. Many separate pieces of information may share the same form; we can say that those pieces are perfectly correlated, or that they are copies of each other, as in copies of a book.
An embodiment of information is the thing whose essence is a given instance of information.
A representation of information is an encoding of some pattern of information within some other pattern or instance.
An interpretation of information is a decoding of a pattern of information as being a representation of another specific pattern or fact.
A subject of information is the thing that is identified or described by a given instance or piece of information. (Most generally, a thing that is a subject of information could be either abstract or concrete; either mathematical or physical.)
An amount of information is a quantification of how large a given instance, piece, or pattern of information is, or how much of a given system's information content (its instance) has a given attribute, such as being known or unknown. Amounts of information are most naturally characterized in logarithmic units.
The above usages are clearly all conceptually distinct from each other. However, many people insist on overloading the word "information" (by itself) to denote (or connote) several of these concepts simultaneously. (Since this may lead to confusion, this article uses more detailed phrases, such as those listed above, whenever the intended meaning is not made clear by the context.)
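As a concrete (if informal) illustration of several of these terms, the following Python sketch encodes a pattern of information into a representation and decodes it again via an interpretation. All names in it are ad hoc illustrative choices, not standard terminology.

```python
# Illustrative sketch only: mapping the vocabulary above onto a trivial
# encode/decode round trip. All variable names are ad hoc.

pattern = "HELLO"                        # a pattern (form) of information

# A representation: the same pattern encoded within another pattern (bytes).
representation = pattern.encode("utf-8")

# An interpretation: decoding the representation back into the pattern
# it was taken to represent.
interpretation = representation.decode("utf-8")
assert interpretation == pattern

# Two separate pieces of information may share an identical pattern,
# as in two copies of a book; they are then perfectly correlated.
copy_one, copy_two = pattern, "".join(pattern)
assert copy_one == copy_two
```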
Classical versus quantum information
The instance of information that is contained in a physical system is generally considered to specify that system's "true" state. (In many practical situations, a system's true state may be largely unknown, but a realist would insist that a physical system nevertheless always has, in principle, a true state of some sort, whether classical or quantum.)
When discussing the information that is contained in physical systems according to modern quantum physics, we must distinguish between classical information and quantum information. Quantum information specifies the complete quantum state vector (or equivalently, wavefunction) of a system, whereas classical information, roughly speaking, only picks out a definite (pure) quantum state if we are already given a prespecified set of distinguishable (orthogonal) quantum states to choose from; such a set forms a basis for the vector space of all the possible pure quantum states (see pure state). Quantum information could thus be expressed by providing (1) a choice of a basis such that the actual quantum state is equal to one of the basis vectors, together with (2) the classical information specifying which of these basis vectors is the actual one. (However, the quantum information by itself does not include a specification of the basis; indeed, an uncountable number of different bases will include any given state vector.)
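To make this basis-plus-label decomposition concrete, the following sketch (a minimal illustration using numpy; the particular qubit state and bases are arbitrary choices, not drawn from any source) expresses one fixed state vector in two different orthonormal bases. Relative to a basis that contains the state, a single classical label suffices to pick it out; relative to another basis, no single label does.

```python
import numpy as np

# One fixed pure qubit state (an arbitrary, illustrative choice).
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Basis 1: the computational basis {|0>, |1>}.
computational = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Basis 2: the Hadamard basis {|+>, |->}; psi equals its first vector.
hadamard = [np.array([1.0, 1.0]) / np.sqrt(2),
            np.array([1.0, -1.0]) / np.sqrt(2)]

for name, basis in (("computational", computational), ("hadamard", hadamard)):
    # Squared overlaps of psi with each basis vector.
    weights = [abs(np.dot(b, psi)) ** 2 for b in basis]
    print(name, [round(w, 3) for w in weights])

# computational [0.5, 0.5] -- psi is not any one computational basis vector
# hadamard      [1.0, 0.0] -- given this basis, the classical label
#                             "first vector" specifies the state exactly
```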
Note that the amount of classical information in a quantum system gives the maximum amount of information that can actually be measured and extracted from that quantum system for use by external classical (decoherent) systems, since only basis states are operationally distinguishable from each other. The impossibility of differentiating between non-orthogonal states is a fundamental principle of quantum mechanics,[citation needed] closely related to Heisenberg's uncertainty principle.[citation needed] Because of its more general utility, the remainder of this article deals primarily with classical information, although quantum information theory also has some potential applications (quantum computing, quantum cryptography, quantum teleportation) that are currently being actively explored by both theorists and experimentalists.[1]
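The operational indistinguishability of non-orthogonal states can be quantified by the Helstrom bound, the best possible single-measurement probability of correctly identifying which of two equiprobable pure states was prepared. The sketch below evaluates it for an orthogonal and a non-orthogonal pair (the state choices are again illustrative).

```python
import numpy as np

def helstrom_success(phi0, phi1):
    """Best single-shot probability of correctly identifying which of two
    equiprobable pure states was prepared (the Helstrom bound)."""
    overlap = abs(np.vdot(phi0, phi1))
    return 0.5 * (1.0 + np.sqrt(1.0 - overlap ** 2))

zero = np.array([1.0, 0.0])
one  = np.array([0.0, 1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

print(helstrom_success(zero, one))   # 1.0   : orthogonal, perfectly distinguishable
print(helstrom_success(zero, plus))  # ~0.854: non-orthogonal, never with certainty
```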
Quantifying classical physical information
An amount of (classical) physical information may be quantified, as in information theory, as follows.[2] For a system S, defined abstractly in such a way that it has N distinguishable states (orthogonal quantum states) that are consistent with its description, the amount of information I(S) contained in the system's state can be said to be log(N). The logarithm is selected for this definition since it has the advantage that this measure of information content is additive when concatenating independent, unrelated subsystems; e.g., if subsystem A has N distinguishable states (I(A) = log(N) information content) and an independent subsystem B has M distinguishable states (I(B) = log(M) information content), then the concatenated system has NM distinguishable states and an information content I(AB) = log(NM) = log(N) + log(M) = I(A) + I(B). We expect information to be additive from our everyday associations with the meaning of the word, e.g., that two pages of a book can contain twice as much information as one page.
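The additivity of the log measure can be verified directly, as in this short sketch (the subsystem sizes are arbitrary illustrative values):

```python
import math

def info_content(num_states, base=2):
    """I(S) = log(N): the information content, in units fixed by the log
    base, of a system with N distinguishable states."""
    return math.log(num_states, base)

N, M = 8, 4                       # distinguishable states of subsystems A and B
I_A, I_B = info_content(N), info_content(M)
I_AB = info_content(N * M)        # the concatenated system has N*M states

assert math.isclose(I_AB, I_A + I_B)   # additivity: log(NM) = log N + log M
print(I_A, I_B, I_AB)                  # 3.0 2.0 5.0 (bits)
```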
The base of the logarithm used in this definition is arbitrary, since it affects the result by only a multiplicative constant, which determines the unit of information that is implied. If the log is taken base 2, the unit of information is the binary digit or bit (so named by John Tukey); if we use a natural logarithm instead, the resulting unit is called the "nat." In magnitude, a nat is equal to Boltzmann's constant k (or, per mole, the ideal gas constant R), although these particular quantities are usually reserved for measuring physical information that happens to be entropy, and are expressed in physical units such as joules per kelvin or kilocalories per mole-kelvin.
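The unit conversions implied here are simple multiplicative factors; the following sketch converts one nat into bits and into conventional entropy units using the exact SI value of Boltzmann's constant.

```python
import math

K_BOLTZMANN = 1.380649e-23        # J/K (exact in the 2019 SI)
BITS_PER_NAT = 1.0 / math.log(2)  # ~1.4427 bits per nat

def nats_to_bits(nats):
    return nats * BITS_PER_NAT

def nats_to_joules_per_kelvin(nats):
    """A nat of entropy expressed in conventional thermodynamic units."""
    return nats * K_BOLTZMANN

print(nats_to_bits(1.0))               # ~1.442695
print(nats_to_joules_per_kelvin(1.0))  # ~1.380649e-23 J/K
```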
Physical information and entropy
An easy way to understand the underlying unity between physical (as in thermodynamic) entropy and information-theoretic entropy is as follows: entropy is simply that portion of the (classical) physical information contained in a system of interest (whether it is an entire physical system, or just a subsystem delineated by a set of possible messages) whose identity (as opposed to amount) is unknown (from the point of view of a particular knower). This informal characterization corresponds both to von Neumann's formal definition of the entropy of a mixed quantum state (which is just a statistical mixture of pure states; see von Neumann entropy) and to Claude Shannon's definition of the entropy of a probability distribution over classical signal states or messages (see information entropy).[2] Incidentally, the credit for Shannon's entropy formula (though not for its use in an information-theoretic context) really belongs to Boltzmann, who derived it much earlier for use in his H-theorem of statistical mechanics.[3] (Shannon himself references Boltzmann in his monograph.[2])
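The correspondence between the two formal definitions can be seen numerically: for a density matrix that is diagonal in some basis, the von Neumann entropy reduces to the Shannon entropy of its eigenvalues. A minimal sketch (using numpy; the example distribution is arbitrary):

```python
import numpy as np

def shannon_entropy(p):
    """H = -sum p_i log2 p_i, in bits; zero-probability terms contribute 0."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

def von_neumann_entropy(rho):
    """S = -Tr(rho log2 rho), computed from the density matrix eigenvalues."""
    eigenvalues = np.linalg.eigvalsh(rho)
    return shannon_entropy(eigenvalues.clip(min=0.0))

p = [0.5, 0.25, 0.25]
rho = np.diag(p)                  # a mixed state, diagonal in this basis

print(shannon_entropy(p))         # 1.5 bits
print(von_neumann_entropy(rho))   # 1.5 bits -- the definitions coincide here
```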
Furthermore, even when the state of a system is known, we can say that the information in the system is still effectively entropy if that information is effectively incompressible, that is, if there are no known or feasibly determinable correlations or redundancies between different pieces of information within the system. Note that this definition of entropy can even be viewed as equivalent to the previous one (unknown information) if we take a meta-perspective, and say that for observer A to "know" the state of system B means simply that there is a definite correlation between the state of observer A and the state of system B; this correlation could thus be used by a meta-observer (that is, whoever is discussing the overall situation regarding A's state of knowledge about B) to compress his own description of the joint system AB.[4]
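The meta-observer's compression can be made quantitative: if B is a perfect copy of A, the joint system AB carries only the entropy of A alone, not the sum of the two marginal entropies. A small illustrative sketch:

```python
import numpy as np

def entropy_bits(p):
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

# A is uniform over four states; B is a perfect copy of A.
p_A = np.full(4, 0.25)
p_AB = np.zeros((4, 4))           # joint distribution of (A, B)
np.fill_diagonal(p_AB, 0.25)      # all probability mass lies on a == b

print(entropy_bits(p_A))          # 2.0 bits for A alone
print(entropy_bits(p_AB.ravel())) # 2.0 bits for AB jointly -- the correlation
                                  # lets AB be described in H(A) bits,
                                  # not H(A) + H(B) = 4.0 bits
```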
Due to this connection with algorithmic information theory,[5] entropy can be said to be that portion of a system's information capacity which is "used up," that is, unavailable for storing new information (even if the existing information content were to be compressed). The rest of a system's information capacity (aside from its entropy) might be called extropy, and it represents the part of the system's information capacity which is potentially still available for storing newly derived information. The fact that physical entropy is basically "used-up storage capacity" is a direct concern in the engineering of computing systems; e.g., a computer must first remove the entropy from a given physical subsystem (eventually expelling it to the environment, and emitting heat) in order for that subsystem to be used to store some newly computed information.[4]
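A toy accounting of this "used-up capacity" picture, together with the minimum heat cost of expelling the entropy given by Landauer's principle (the register size and temperature are illustrative values only):

```python
import math

K_BOLTZMANN = 1.380649e-23   # J/K

capacity_bits = 64   # total information capacity of a physical register
entropy_bits = 40    # portion already occupied by incompressible content

extropy_bits = capacity_bits - entropy_bits   # 24 bits still free

# Landauer's principle: expelling one bit of entropy to an environment at
# temperature T dissipates at least k*T*ln(2) of heat.
T = 300.0                                     # kelvin (room temperature)
min_heat = entropy_bits * K_BOLTZMANN * T * math.log(2)

print(extropy_bits)          # 24
print(min_heat)              # ~1.15e-19 J to clear the register for reuse
```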
Extreme physical information
Main article: Extreme physical information
In a theory developed by B. Roy Frieden,[6][7][8][9] "physical information" is defined as the loss of Fisher information that is incurred during the observation of a physical effect. Thus, if the effect has an intrinsic information level J but is observed at information level I, the physical information is defined to be the difference I − J. This difference defines an information Lagrangian. Frieden's principle of extreme physical information (EPI) states that extremalizing I − J by varying the system probability amplitudes gives the correct amplitudes for most or even all physical theories. The EPI principle was recently proven;[10] it follows from a system of mathematical axioms, due to L. Hardy, that defines all known physics.[11]
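EPI itself is a variational principle and is not reproduced here, but the Fisher information I that it extremalizes is easy to illustrate numerically. For a Gaussian location parameter the exact Fisher information is 1/σ²; the sketch below recovers this by Monte Carlo (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

theta, sigma = 0.0, 2.0                 # illustrative parameter choices
x = rng.normal(theta, sigma, size=200_000)

# Score of the Gaussian location family:
#   d/d(theta) log p(x; theta) = (x - theta) / sigma**2
score = (x - theta) / sigma ** 2

fisher_estimate = np.mean(score ** 2)   # Fisher information = E[score^2]
print(fisher_estimate, 1 / sigma ** 2)  # ~0.25 vs the exact value 0.25
```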