July 07, 2016

Constructive Empiricism - An introduction to Scientific Antirealism

(Abstracted from the Stanford Encyclopedia of Philosophy)

Copyright © 2014 by The Metaphysics Research Lab, Stanford University.

Constructive Empiricism:

Constructive empiricism is the version of scientific anti-realism promulgated by Bas van Fraassen in his famous book The Scientific Image (1980).

Van Fraassen defines the view as follows:

Science aims to give us theories which are empirically adequate; and acceptance of a theory involves as belief only that it is empirically adequate. (1980, 12)

A theory is empirically adequate exactly if what it says about the observable things and events in the world is true — exactly if it ‘saves the phenomena.’ (van Fraassen 1980, 12)

To understand the above account, one needs first to appreciate the difference between the syntactic view of scientific theories and van Fraassen's preferred semantic view of scientific theories.

On the syntactic view, a theory is given by an enumeration of theorems, expressed in some one particular language.

In contrast, on the semantic view, a theory is given by the specification of a class of structures (describable in various languages) that are the theory's models (the determinate structures of which the theory holds true).

As van Fraassen says,

To present a theory is to specify a family of structures, its models; and secondly, to specify certain parts of those models (the empirical substructures) as candidates for the direct representation of observable phenomena. (1980, 64)

A theory is empirically adequate, then, if appearances — “the structures which can be described in experimental and measurement reports” (1980, 64) — are isomorphic to the empirical substructures of some model of the theory.

Roughly speaking, the theory is empirically adequate if the observable phenomena can “find a home” within the structures described by the theory — that is to say, the observable phenomena can be “embedded” in the theory.
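The "embedding" idea can be sketched as a toy in Python. This is an illustrative simplification, not van Fraassen's actual model-theoretic formalism: a "model" here is just a dictionary assigning values to both observable and unobservable quantities, and the names (`planet_position`, `electron_spin`) are made up for the example.

```python
# Toy illustration (not van Fraassen's formalism): a "model" assigns values
# to both observable and unobservable quantities; empirical adequacy only
# requires agreement on the observable part.
model = {
    "planet_position": 1.52,   # observable (e.g. a measured orbital radius)
    "electron_spin": 0.5,      # unobservable posit of the theory
}

observations = {"planet_position": 1.52}  # what experimental reports contain

def empirically_adequate(model, observations):
    """True if every observed quantity finds a matching value in the model."""
    return all(model.get(k) == v for k, v in observations.items())

print(empirically_adequate(model, observations))  # True: the phenomena "embed"
```

Note that the model's claim about `electron_spin` plays no role in the check: the theory could be wrong about unobservables and still be empirically adequate in this toy sense.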

The constructive empiricist rejects arguments that suggest that one is rationally obligated to believe in the truth of a theory, given that one believes in the empirical adequacy of the theory.

For this epistemological argument to work, the distinction between empirical adequacy and truth has to be well-founded.


Constructive empiricism is a view which stands in contrast to the type of scientific realism that claims the following:

Science aims to give us, in its theories, a literally true story of what the world is like; and acceptance of a scientific theory involves the belief that it is true. (van Fraassen 1980, 8)

In contrast, the constructive empiricist holds that science aims at truth about observable aspects of the world, but that science does not aim at truth about unobservable aspects.

Acceptance of a theory, according to constructive empiricism, correspondingly differs from acceptance of a theory on the scientific realist view: the constructive empiricist holds that as far as belief is concerned, acceptance of a scientific theory involves only the belief that the theory is empirically adequate.

Dr. Michela Massimi holds a Ph.D. from the London School of Economics and is a senior lecturer in philosophy at the University of Edinburgh.

In the following video, she explains Constructive Empiricism very nicely:

Terms used by Dr. Massimi:

1. "Scientific realism":
A positive epistemic attitude towards the content of our best theories and models, recommending belief in both observable and unobservable aspects of the world described by the sciences.

2. "Scientific Anti-realism":
In philosophy of science, anti-realism applies chiefly to claims about the non-reality of "unobservable" entities such as electrons or genes, which are not detectable with human senses.

3. "Epistemology":
The study of knowledge and of the degree of its validation.

4. "Ontology":
The philosophical study of the nature of being, becoming, existence, or reality, as well as the basic categories of being and their relations.

5. "Empirical Adequacy":
Roughly speaking, a theory is empirically adequate if everything it says about observable things and events is true, whatever it may say about unobservables.

6. "Semantic aspect":
Semantics is the study of meaning.

7. "Syntactic aspect":
Syntax, or the study of structure.

December 13, 2015

A commercial Quantum Computer


A quick look at commercial quantum computers, which are a totally different ball game from the Bill Gates type of computer.

Currently manufactured by a Canadian startup named D-Wave but others will follow soon.

D-Wave's quantum computer can hold in its "digital mind", possibilities that exceed the number of particles in the whole observable universe!

So if you gave such a computer a chess situation, or any real world issue, it would be able to ponder a number of relevant possibilities that exceeds the total number of particles in the whole observable universe!

Applications would be traffic control, air traffic control, weather predictions....political strategy. War strategy. Very long list.

D-Wave's current machine can ponder 2^1000 possibilities simultaneously.

That's 2 raised to the power of 1000, a number far larger than the number of particles in the observable universe.

© D-Wave Systems:

Jun 22, 2015

D-Wave Systems Breaks the 1000 Qubit Quantum Computing Barrier

New Milestone Will Enable System to Address Larger and More Complex Problems

Palo Alto, CA - June 22, 2015 - D-Wave Systems Inc., the world's first quantum computing company, today announced that it has broken the 1000 qubit barrier, developing a processor about double the size of D-Wave’s previous generation and far exceeding the number of qubits ever developed by D-Wave or any other quantum effort.

At 1000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space which dwarfs the 2^512 possibilities available to the 512-qubit D-Wave Two.

In fact, the new search space contains far more possibilities than there are ‪particles in the observable universe.
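Python's arbitrary-precision integers make the size comparison above easy to check directly. The figure of roughly 10^80 particles in the observable universe is a commonly quoted order-of-magnitude estimate:

```python
# Compare 2**1000 and 2**512 with the ~10**80 particles commonly
# estimated to exist in the observable universe.
search_1000 = 2 ** 1000
search_512 = 2 ** 512
particles = 10 ** 80

print(len(str(search_1000)))   # 2**1000 has 302 decimal digits
print(search_1000 > particles) # True: vastly more than 10**80
print(search_1000 // search_512 == 2 ** 488)  # 488 further doublings
```

Each added qubit doubles the search space, which is why the jump from 512 to 1000 qubits multiplies it by 2^488.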

Formidable power, expanding very rapidly: in certain applications, the quantum computer reduces your Bill Gates type of computer to the status of a bullock cart.

Let us take a quick look at what makes a Quantum Computer tick. As explained by the founder of D-Wave.


Every time you add one of these qubits, you double the number of possibilities the machine can consider. The way I think about this is that the shadows of these parallel worlds overlap with ours, and if we are smart enough, we can dive into these parallel worlds, grab their resources, and pull them back into ours.

So what is he talking about, when he talks of parallel worlds?

He is referring to the MWI, or the many worlds interpretation of Quantum Physics:

Many-worlds interpretation of Quantum Physics:

This interpretation implies that all possible alternate histories and futures [of anything] are real, each representing an actual "world" (or "universe").

The hypothesis states there is a very large—perhaps infinite number of universes, and everything that could possibly have happened in our past, but did not, has occurred in the past of some other universe or universes.

MWI is one of many multiverse hypotheses in physics and philosophy. It is currently considered a mainstream interpretation along with others.

Before many-worlds, reality had always been viewed as a single unfolding history. Many-worlds, however, views reality as a many-branched tree, wherein every possible quantum outcome is realised. Many-worlds reconciles the observation of non-deterministic events, such as random radioactive decay, with the fully deterministic equations of quantum physics.


Bloch Sphere representation of a Qubit

The basic digit of a quantum computer is a QUBIT. It is a VECTOR, while the basic digit used in a conventional Bill Gates type of computer is a SCALAR.

Via Wikipedia:

Consider first a classical computer that operates on a three-bit register. The state of the computer at any time is a probability distribution over the 2^3 = 8 different three-bit strings 000, 001, 010, 011, 100, 101, 110, 111. If it is a deterministic computer, then it is in exactly one of these states with probability 1.

However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. We can describe this probabilistic state by eight non-negative numbers A, B, C, D, E, F, G, H (where A is the probability that the computer is in state 000, B is the probability that the computer is in state 001, etc.). There is a restriction that these probabilities sum to 1.

The state of a three-qubit quantum computer is similarly described by an eight-dimensional vector (a, b, c, d, e, f, g, h), called a ket. Here, however, the coefficients can have complex values, and it is the sum of the squares of the coefficients' magnitudes, |a|^2 + |b|^2 + ... + |h|^2, that must equal 1. These squared magnitudes represent the probability of each of the given states. However, because a complex number encodes not just a magnitude but also a direction in the complex plane, the phase difference between any two coefficients (states) represents a meaningful parameter. This is a fundamental difference between quantum computing and probabilistic classical computing.
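The contrast can be sketched in a few lines of Python. This is an illustrative toy, not D-Wave's architecture: a classical probabilistic state is a list of non-negative reals summing to 1, while a quantum state is a list of complex amplitudes whose squared magnitudes sum to 1, with phases carrying extra information.

```python
import cmath
import itertools
import math

# Classical probabilistic 3-bit register: 8 non-negative reals summing to 1.
probs = [0.5, 0.5, 0, 0, 0, 0, 0, 0]   # 50/50 between 000 and 001
assert abs(sum(probs) - 1) < 1e-12

# 3-qubit state: 8 complex amplitudes whose squared magnitudes sum to 1.
# Equal superposition of 000 and 001 with a relative phase of pi/2,
# something the classical distribution above cannot express.
amp = [1 / math.sqrt(2), cmath.exp(1j * math.pi / 2) / math.sqrt(2)] + [0] * 6

norm = sum(abs(a) ** 2 for a in amp)
print(round(norm, 10))  # 1.0: a valid quantum state

# Born rule: measurement probabilities are the squared magnitudes. Here they
# match the classical distribution; the phase is the extra parameter.
probs_from_amp = [abs(a) ** 2 for a in amp]
for bits, p in zip(itertools.product("01", repeat=3), probs_from_amp):
    if p > 0:
        print("".join(bits), round(p, 3))
```

The two states assign identical measurement probabilities, yet they are physically different: the relative phase in `amp` can produce interference that no classical probability vector can reproduce.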

D-Wave has a 1000 qubit commercially successful quantum computer.

August 16, 2013

All Theories are Unprovable & Improbable

"Any physical theory is always provisional, in the sense that it is only a hypothesis: you can never prove it. No matter how many times the results of an experiment agree with some theory, you can never be sure that the next time the result will not contradict the theory." - Stephen Hawking, A Brief History of Time.

And here are mathematician Benoit Mandelbrot's concepts, summarised at Wikipedia:

"Mandelbrot described both the 'Noah effect' (in which sudden discontinuous changes can occur) and the 'Joseph effect' (in which persistence of a value can occur for a while, yet suddenly change afterwards)." - Wikipedia.

Mandelbrot says "persistence of a value can occur for a while, yet suddenly change afterwards."

This would apply to natural constants too. Your speed of light, the universal constant for gravitation, and all other universal constants - they may change. Gradually, even suddenly.

Nature may not be following any "unchangeable laws". There is precision and repetition in nature certainly, but it may not be following any unchangeable laws.

In fact, in 1934, Karl Popper argued that the mathematical probability of all theories, scientific or pseudo scientific, given any amount of evidence, is zero.

"Why theories are unprovable & improbable"
by Marc O'Brien

A. Why all theories are unprovable:

Newton's theory of gravitation says that every particle of matter in the universe attracts every other particle with a force according to an inverse square law.
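The law itself is F = G·m1·m2 / r^2. A quick calculation with standard textbook figures (the CODATA value of G and approximate Earth/Moon masses and mean separation) shows it in action:

```python
# F = G * m1 * m2 / r**2  -- Newton's law of universal gravitation.
# Values are standard textbook figures, rounded.
G = 6.674e-11          # m^3 kg^-1 s^-2, gravitational constant
m_earth = 5.972e24     # kg
m_moon = 7.348e22      # kg
r = 3.844e8            # m, mean Earth-Moon distance

F = G * m_earth * m_moon / r ** 2
print(f"{F:.3e} N")    # on the order of 2e20 newtons

# The inverse-square character: doubling the distance quarters the force.
print(F / (G * m_earth * m_moon / (2 * r) ** 2))  # 4.0
```

The point of the passage, of course, is that such calculations test the law only for the finitely many bodies we actually observe.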

Newton's theory is a universal generalization that applies to every particle of matter, anywhere in the universe, at any time. But however numerous they might be, our observations of planets, falling bodies, and projectiles concern only a finite number of bodies during finite amounts of time.

So the scope of Newton's theory vastly exceeds the scope of the evidence. It is possible that all our observations are correct, and yet Newton's theory is false because some bodies not yet observed violate the inverse square law.

Since "All Fs are G" cannot be deduced from "Some Fs are G," it cannot be true that Newton's theory can be proven by logically deducing it from the evidence.

As Lakatos points out, this prevents us from claiming that scientific theories, unlike pseudo-scientific theories, can be proven from observational facts. The truth is that no theory can be deduced from such facts. All theories are unprovable, scientific and unscientific alike.


B. Why all theories are improbable:

While conceding that scientific theories cannot be proven, most people still believe that theories can be made more probable by evidence.

Lakatos follows Popper in denying that any theory can be made probable by any amount of evidence. Popper's argument for this controversial claim rests on the analysis of the objective probability of statements given by inductive logicians.

Consider a card randomly drawn from a standard deck of fifty-two cards. What is the probability that the card selected is the ten of hearts?

Obviously, the answer is 1/52. There are fifty-two possibilities, each of which is equally likely and only one of which would render true the statement "This card is the ten of hearts."

Now consider a scientific theory that, like Newton's theory of gravitation, is universal.

The number of things to which Newton's theory applies is, presumably, infinite. Imagine that we name each of these things by numbering them 1, 2, 3, . . . , n, . . .

There are infinitely many ways the world could be, each equally probable. For example:

Body 1 obeys Newton's theory, but none of the others do.

Bodies 1 and 2 obey Newton's theory, but none of the others do.

Bodies 1, 2, and 3 obey Newton's theory, but none of the others do.

. . .

All bodies (1, 2, 3, . . . , n, . . . ) obey Newton's theory.

Since these possibilities are infinite in number, and each of them has the same probability, the probability of any one of them must be 0. But only one, the last one, represents the way the world would be if Newton's theory were true. So the probability of Newton's theory (and any other universal generalization) must be ZERO.
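The limiting argument can be made concrete with a short Python loop: as the number of equally likely possibilities grows, the probability of any single one shrinks toward zero.

```python
# The probability of one possibility among n equally likely ones is 1/n.
# As n grows without bound, that probability tends to zero -- the heart
# of Popper's argument about universal theories.
for n in [52, 1_000, 1_000_000, 10 ** 12]:
    print(n, 1 / n)
```

With infinitely many possibilities, the uniform probability of each single one, including the one where Newton's theory holds universally, is 0.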

Now one might think that, even if the initial probability of a theory must be ZERO, the probability of the theory when it has been confirmed by evidence will be greater than ZERO. As it turns out, the probability calculus denies this.

Let our theory be T, and let our evidence for T be E.

We are interested in P(T/E), the probability of T given our evidence E. Bayes's theorem (which follows logically from the axioms of the probability calculus) tells us that this probability is:

P(T/E) = [P(E/T) × P(T)] / P(E)

If the initial probability of T, that is P(T), is ZERO, then P(T/E) must also be ZERO. Thus, no theory can increase in objective probability, regardless of the amount of evidence for it. For this reason, Lakatos joins Popper in regarding all theories, whether scientific or not, as equally unprovable and equally improbable.
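The arithmetic is trivial to verify. The helper function below is just a restatement of the formula above, with illustrative numbers chosen for the example:

```python
def posterior(p_e_given_t, p_t, p_e):
    """Bayes's theorem: P(T/E) = P(E/T) * P(T) / P(E)."""
    return p_e_given_t * p_t / p_e

# However strongly the evidence favours the theory (P(E/T) = 1),
# a zero prior forces a zero posterior.
print(posterior(p_e_given_t=1.0, p_t=0.0, p_e=0.5))  # 0.0

# A nonzero prior, by contrast, can be raised by evidence:
print(posterior(p_e_given_t=1.0, p_t=0.1, p_e=0.5))  # 0.2
```

Multiplying anything by a prior of zero yields zero, which is why no amount of confirming evidence can lift a universal theory above zero probability on this analysis.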

Why all theories are unprovable Part 2
by Marc O'Brien

We cannot study the big bang itself, if there even was one. What we have to do is infer from the hypothetical big bang to what the universe might look like if the big bang happened: microwave background radiation, gravitational waves, and so on. If we find those things, then we consider the big bang a theory and no longer a hypothesis.

However, no theory can ever be proved and certainly no single observation can be expected to prove a whole theory. It is literally illogical to ask for proof of a theory. All proofs of a theory commit the fallacy of affirming the consequent. That is "If P then Q, Q, ergo P".
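The fallacy can be exhibited by brute force: enumerate all truth-value assignments and look for a row where the premises "If P then Q" and "Q" both hold while the conclusion P fails.

```python
import itertools

# Search for counterexamples to "If P then Q; Q; ergo P": assignments
# where both premises are true but the conclusion P is false.
counterexamples = [
    (p, q)
    for p, q in itertools.product([True, False], repeat=2)
    if ((not p) or q) and q and not p   # premises true, conclusion false
]
print(counterexamples)  # [(False, True)]: Q can hold while P fails
```

The single row found is exactly the situation the text describes: the predicted observation Q obtains, yet the theory P is false, so the inference is invalid.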

Instead theories are explanations of all the observed facts taken holistically - theories offer explanations for why it is that all these facts seen as relevant hang together the way they do.

One might ask for proof that the earth rotates and orbits the sun. Another might answer that all one needs to do is observe the sun rising in the morning. But a simpler explanation of the sun rising in the morning would be that the sun orbits the earth. By this we see that no single observation could ever prove a theory. There are always other possible explanations for singular observations. Theories must instead explain all the facts combined - they must account for all the observed facts under one umbrella.

So what happens when, like the big bang hypothesis, we make a god hypothesis? Well, we ask ourselves what the world and universe would look like if such a hypothesis got it right.

But when we do look to the world and universe, we find that, of all the explanations for the way they are, the weakest is that there is or was a god. There are better explanations, and only the ignorant or irrational would prefer the weaker ones.

But again, only those ignorant of logic and the theory of knowledge would ask for a proof of anything outside of logic and math. It is literally illogical to ask for proof of a theory.

All Theories are unprovable part 3
by Marc O'Brien

Theories invoke confirmations out of the logical form "If P then Q" where an affirmation of Q cannot guarantee P.

Theorems, on the other hand, invoke confirmations out of the logical form "Iff P then Q" ("if and only if"), where an affirmation of Q unequivocally also establishes P. Theorems are confined to the analytic areas, math and logic; they do not also cover theories (explanations).

Scientific theories are not theorems but rather are explanations and most explanations actually can't even be arguments anyway.

It is literally illogical, and a major misunderstanding of science and of the nature of theories generally, to ask for a theory to be proved, or not to realise that one is being asked a nonsense question when asked for such a proof.

Also there is no such thing as "The" Scientific Method. Believing so is an example of scientism. Instead there are a myriad of methods each employed according to their appropriateness.

Ray Comfort asks not for many but for just one observation that proves evolution.

But logic informs us that no single observation can ever unequivocally affirm a theoretical conditional proposition. For any one observation and even multiple observations there is always the possibility of some other explanation.

As I have said before, observing the sun's rising might be invoked in support of heliocentrism, yet that one observation could be more simply invoked in support of geocentrism; such is the weakness of pursuing some single observation that supports or proves a theory. The strength of a theory is measured not by its being supported by one single observation but by its explaining all related observations, and so not one but a list of observations must be invoked when trying to demonstrate that a theory is a good explanation.

The theory of evolution is not confirmed, and, like all theories, cannot be proved, by any single observation; instead it is the best explanation for all the observations taken as a whole.

Further reading: Science and Pseudoscience by Imre Lakatos