time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


A philosophy of science view on the tight pll discussion

Ulrich Bangert
Thu, Jun 3, 2010 12:15 PM

Gentlemen,

the discussion between Bruce and Warren concerning Warren's implementation
of NIST's "Tight PLL Method" has caused quite a stir in our group.

My scientific knowledge of the topic under discussion is so much inferior to
Bruce's that I do not dare to contribute to the discussion itself. It may,
however, be helpful to look at the discussion from a "philosophy of science"
point of view.

The most basic form of logic is propositional logic. A proposition, in the
sense of propositional logic, is a linguistic entity which can be assigned a
logic value such as "true" or "false", or "0" or "1", without any ambiguity.
Whether a proposition is true or false may depend on circumstances. For
example, the proposition "Today is Tuesday" is true on Tuesdays and false on
all other days of the week.

Other propositions are true or false by virtue of their logical construction.
The combined proposition "Today is Tuesday or today is not Tuesday" is always
true from a logical point of view, despite the fact that you may consider it
rather "useless".

Propositional logic then deals with the question of what happens when two or
more propositions are combined by logic operators, as in the second example
with the operator "or". Since a proposition, say "a", and a second
proposition, say "b", can only take the values "0" or "1", it is easy to put
every possible combination of a and b values into a simple diagram, for
example for the "or" operator:
a  b  a or b
------------
0  0    0
0  1    1
1  0    1
1  1    1

Most if not all of us not only know such diagrams but actually use them in
digital electronics. The well-known operators are "or", "and" and "negation",
and indeed it can be shown that ALL digital operators can be constructed from
a combination of "negation" and either "and" or "or". By the way, this is the
reason why the first member of the ubiquitous 7400 TTL family, the 7400
itself, was a quad NAND gate, a combination of "negation" and "and". The
designers had learned their lesson and made this chip in such a way that ALL
possible combinations of two input variables could be realized with one type
of chip.
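
To illustrate the point, here is a minimal sketch in Python (my own
illustration, not part of the original argument) that builds NOT, AND and OR
from NAND alone:

def nand(a, b):
    # NAND is 0 only when both inputs are 1
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)              # NOT a  =  a NAND a

def and_(a, b):
    return not_(nand(a, b))        # a AND b  =  NOT (a NAND b)

def or_(a, b):
    return nand(not_(a), not_(b))  # a OR b  =  (NOT a) NAND (NOT b)

# Print the truth tables to check them against the diagrams above
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "not:", not_(a), "and:", and_(a, b), "or:", or_(a, b))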

Nevertheless, the 3rd column of the above diagram can be considered a
four-digit binary value, and so it becomes immediately clear that there must
be a total of 16 different logic operators, each of them producing a number
between 0 and 15 (decimal), i.e. between 0000 and 1111 (binary), in the 3rd
column. Each of these operators has a name of its own. Although widely used
in common speech, one of the less well known operators is the "formal
implication": "a implies b", as we say, or "b follows from a".

The "formal implication" has the logic diagram (which is identical to "(not
a) or b"):

a  b  a -> b
------------
0  0    1
0  1    1
1  0    0
1  1    1

What may look unspectacular at first glance in fact contains two of the most
important pillars of ALL scientific reasoning:

While the third row of the diagram basically says that it is not possible to
arrive at false results when logic is applied correctly to true propositions,
rows one and two say that logic may deliver false results (row one) or true
results (row two) when applied correctly to FALSE propositions. That is why
the ancient logicians already knew:

Ex falso quodlibet

which, freely translated from the Latin, means something like: "From false
propositions anything can be concluded."

One consequence of this is that, for a true proposition "b", it is NOT
possible to infer the truth of a proposition "a" from which it has been
concluded.
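
A quick check of this against the implication table above (my own
illustrative snippet in Python):

# "a -> b" is the same as "(not a) or b"
def implies(a, b):
    return (not a) or b

# Suppose "a -> b" holds and "b" is observed to be true.
# Which values of "a" are compatible with that? Both of them:
possible_a = [a for a in (False, True) if implies(a, True)]
print(possible_a)   # [False, True] -- the truth of b tells us nothing about a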

A second consequence is that NO scientific theory can be verified by an
experiment. A theory may formulate a proposition about the outcome of a
certain experiment. Even if the outcome of the experiment and the proposition
are in good agreement, it would be completely wrong to infer from the
experiment that the theory is correct.

It is possible to harden a theory by experiments. For this purpose it is
necessary to produce a large number of different and independent propositions
based on the theory and to test each single proposition with an experiment.
The more propositions and the more experiments, the greater the chance that
the theory is correct, but note that even with an unbounded number of
propositions and experiments this is no proof of the theory. Interestingly
enough, you need only a SINGLE experiment to falsify a theory, namely when
the outcome of the experiment differs from the theory's proposition. What can
really be inferred from experiments and observations may also be illustrated
by the following joke:

A physicist, a mathematician and a logician are sitting in a train riding
through Germany. Suddenly they notice a herd of sheep, all of them white
except for one, which is black.

The physicist: "That is a proof that there are black sheep in Germany."

The mathematician: "You physicists use the term 'proof' far too loosely. If
anything, this is a proof that there is at least ONE black sheep in Germany."

The logician: "Let's get serious: this is a proof that there is at least ONE
sheep in Germany with ONE BLACK SIDE."

So, what the heck does all this have to do with the tight PLL discussion? One
thing that I had to read in a time-nuts mail of the last few days was:

>> It doesnt, it only appears to in a very
>> restricted set of circumstances.

> Bruce, I don't understand you, when presented
> with visual evidence that this method works
> you still deny it.

.
.

>> That doesn't work as it has the wrong
>> transfer function.

> Again, it it does not work, how come the
> evidence shows that it does, how do you
> explain that Bruce?

By the criteria explained above, the term "evidence" is used here in far too
strong a sense. The experiment performed by John Miles is NOT an
"experimentum diaboli" in the sense that its outcome would enable us to
decide whether Bruce's or Warren's theory about Warren's implementation of
the NIST tight PLL method is correct. It is not, because it has not falsified
anything.

As far as my limited understanding of the topic allows me to judge: the
outcome of the experiment is not a direct antithesis to anything that Bruce
has remarked, and if I see it correctly, the outcome of the experiment is by
no means contested by Bruce. However, if we want to settle by experiment who
is right and who is wrong, we have to realize that we need a lot of
experiments, with different references and different DUTs. If all
combinations of all DUTs and all references in the hands of time nuts led to
results as good as those of John Miles's experiment, that would allow us to
conclude that the method works well enough for all practical aspects of
time-nuts life (though still without a guarantee for every future
experiment's outcome). Not having done these experiments yet, who knows
whether there is a falsifying experiment among the set of combinations?

Best regards

Ulrich Bangert
www.ulrich-bangert.de
Ortholzer Weg 1
27243 Gross Ippener

J. L. Trantham
Thu, Jun 3, 2010 1:35 PM

Thanks,

I had not thought about this in years.

Joe

Steve Rooke
Thu, Jun 3, 2010 1:54 PM

Ulrich,

So what's this got to do with black sheep? Was this some form of Freudian
slip on your part, Ulrich? :)

So, let's examine what we are looking at here. This has been simplified down
to a single true-or-false value, which would be appropriate if we were
looking at a single data point. That data point could be transformed into a
value which could produce a correct answer but, as already pointed out, this
is only a single point and holds little weight. To be realistic, we have seen
the results of hundreds of collected data points which have been processed
mathematically to give answers for different measurement intervals. If you
took John's two-dimensional graphs and selected a single point along the
x-axis, you would get a one-dimensional graph of a single answer and,
statistically and logically, there would be little trust in it.

We can all see that, for different tau, the data collected and transformed
shows very close agreement. Now, the graphs we have seen are not straight
lines, so the data taken by the two measurement systems must differ in some
way when calculated for each different tau. It would be extremely
coincidental for the full results of the two methods to show such agreement
if one of them were based on incorrect logic. These graphs are not something
plotted linearly with time from the current measurement point; each point on
the graph is the result of a mathematical transformation of all the collected
data before it. It is very unlikely for this to be a coincidence: you cannot
take junk, transform it by junk, and expect to get repeatable agreement
between two measurement methods over such a wide span of tau values. We have
to remember that each data point on an ADEV graph represents a separate
calculation over many measurements, not an individual measurement itself.
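
As a purely illustrative sketch (mine, not from the thread; it assumes evenly
spaced fractional-frequency samples y taken every tau0 seconds), each ADEV
point is computed from the whole record, not from one reading:

import numpy as np

def adev(y, tau0, m):
    """Non-overlapping Allan deviation at tau = m * tau0."""
    # Average the raw samples in blocks of m to get the y-bars for this tau
    n_blocks = len(y) // m
    ybar = np.reshape(y[:n_blocks * m], (n_blocks, m)).mean(axis=1)
    # Allan variance: half the mean squared difference of adjacent averages
    avar = 0.5 * np.mean(np.diff(ybar) ** 2)
    return np.sqrt(avar)

# Toy data: every point of the ADEV curve reprocesses the same long record
y = np.random.normal(0.0, 1e-11, 100_000)
for m in (1, 10, 100, 1000):
    print(m, adev(y, 1.0, m))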

I fully agree that this test was only done for one input source and that it
should be repeated with as many different types as can be found before this
method is deemed usable, but at the same time the initial results of this
test are so encouraging that it is hard to deny that something interesting is
going on here. Trying to throw the baby out with the bath water at this stage
would be pointless and smacks of the initial response to those who said the
world is round and not flat. It is that sort of entrenched belief that we
need to break, so that the flat-Earth believers act constructively in this
exploration.

Best regards,
Steve


--
Steve Rooke - ZL3TUV & G8KVD
The only reason for time is so that everything doesn't happen at once.

  - Einstein
Bob Camp
Thu, Jun 3, 2010 4:18 PM

Hi

To move the example to a more time-nuts-centric view:

I can fire up z38xx and run it on my 3805 for a couple of days.

Bring up the "time variances window" and a lot of graphs come up.

The graphs clearly show that MTIE and TIE are different from ADEV for the
data set.

The graphs might lead you to believe that ADEV and overlapping ADEV are the
same thing. On the plot I have here, Hadamard variance looks a lot like
ADEV. Other than TIE and MTIE, they all look a lot like ADEV on the plots if
you let me "correct" Mod ADEV just a bit.

The experiment with z38xx is only able to tell you just so much.

Bob

time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

Hi

To move the example to a time nuts centric view: I can fire up z38xx and run
it on my 3805 for a couple of days. Bring up the "time variances window" and
a lot of graphs come up. The graphs clearly show that MTIE and TIE are
different than ADEV for the data set. The graphs might lead you to believe
that ADEV and overlapping ADEV are the same thing. On the plot I have here
Hadamard variance looks a lot like the ADEV. Other than TIE and MTIE, they
all look a lot like ADEV on the plots if you let me "correct" Mod ADEV just
a bit.

The experiment with z38xx is only able to tell you just so much.

Bob
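
To make the contrast concrete, here is a rough sketch (mine, not from the
post) of how TIE and MTIE are usually computed from evenly spaced phase
samples x[i] in seconds; the function names are just for illustration. It
shows why these statistics answer a different question than ADEV does.

    import numpy as np

    def tie(x, n):
        # Time Interval Error over a span of n sample intervals:
        # the phase change across each window.
        x = np.asarray(x, dtype=float)
        return x[n:] - x[:-n]

    def mtie(x, n):
        # Maximum TIE over a span of n intervals: the worst-case
        # peak-to-peak phase excursion within any window of n+1 samples.
        x = np.asarray(x, dtype=float)
        return max(float(x[i:i + n + 1].max() - x[i:i + n + 1].min())
                   for i in range(len(x) - n))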
The "formal implication" has the logic diagram (which is identical to "(not a) or b"): a b a -> b ------------ 0 0 1 0 1 1 1 0 0 1 1 1 What may look unspectular at the first glance in effect holds two of the most important supports of ALL scientific reasoning: While the third row of the diagram basically says that it not possible to achieve wrong results when logic is applied correctly to correct propositions, rows one and two say that logic may deliver wrong results (line one) or correct results (line two) if applied correctly to WRONG (false) propositions. That is why already ancient logicians knew: Ex falsi omnis which freely translated from Latin means as much as: "From wrong propositions everything can be condluded". One of the consequences of this is the fact that for a true proposition "b" the inference to the trueness of the proposition "a" from that it has been concluded is NOT possible. A second consequence of this is that NO scientifical theory can be verified by an experiment. A theory may formulate a proposition on the outcome of a certain experiment. Even if the outcome of the experiment and the proposition are in good congruence it would be completely wrong to infere that the theory is correct due to the experiment. It is possible to harden the theory by experiments. For this purpose it is necessary to produce a big number of different and indpendend propositions based on the theory and test each single proposition with an experiment. The more propositions and the more experiments the chance that the theory is correct increases but note that even with an unbound number of propositions and experiments this is no proof of the theory. Interesting enough that you need ony a SINGLE experiment to falsify a theory if the outcome of the experiment is different from the theory's proposition. What can really be infered from experiments and observations may also be shown by the following joke: A physicist, a mathematician and a logician are sitting in a train riding through Germany. Suddenly they notice a herd of sheep whith all being white with the exception of one which is black. The physiscist: "That is a proof that there are black sheeps in Germay" The mathematician: "You physicists are using the term 'proof' in a too relaxed way. If at all this is a proof that there is at least ONE black sheep in Germany" The logician: "Let's get serious: This is a proof that there is at least ONE sheep in Germany with ONE BLACK SIDE". So, what the heck has this all to do with the tight pll discussion? One thing that I had to read in a time nuts mail of the last days was: >> It doesnt, it only appears to in a very >> restricted set of circumstances. > Bruce, I don't understand you, when presented > with visual evidence that this method works > you still deny it. . . >> That doesn't work as it has the wrong >> transfer function. > Again, it it does not work, how come the > evidence shows that it does, how do you > explain that Bruce? Due to the criteria explained above the term "evidence" is used here in a too far-ranging way. The experiment performed by John Miles is NOT a "experimentum diaboli" in the sense that the outcome of the experiment would enable us to decide whether Bruce's or Warren's theory about his implementation of the NIST tight pll method is correct. It is not because it has not falsified anything. 
As far as my limited understanding of the topic allows me to judge: The outcome of the experiment is not a direct antithesis to anything that Bruce has remarked and if I see it correct the outcome of the experiment is by no means contested by Bruce. However, if we want to check who's right and who's wrong with experiments, we need to know that we need a lot of experiments with different references and different DUTs. If all combinations of all DUTs and all references in the hands of time nuts would lead to equally well results as in John Miles's experiment, that would allow to conclude that the method works ok for all practical aspects of time nuts life (however without the guarantee for every future experiment outcome). Having not done these experiments yet who knows whether there is a falsifying experiment among the set of combinations? Best regards Ulrich Bangert www.ulrich-bangert.de Ortholzer Weg 1 27243 Gross Ippener _______________________________________________ time-nuts mailing list -- time-nuts@febo.com To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts and follow the instructions there.
MD
Magnus Danielson
Sat, Jun 5, 2010 11:07 AM

On 06/03/2010 02:15 PM, Ulrich Bangert wrote:

Gentlemen,

the discussion between Bruce and Warren concerning Warren's implementation
of NIST's "Tight PLL Method" has caused quite a stir in our group.

My scientifical knowledge about the discussed topic is so much inferior
compared to Bruce's one that I don't have the heart to enter a contribution
to the discussion itself. It may however be helpful to have a look at the
discussion from a "philosophy of science" point of view.

The most basic form of logic is the propositional logic.

I think even the attempt at propositional logic has a basic flaw, namely:
can the tentative goal be reached at all?

In this case, can we get a "True Allan variance" measure?

The answer is simply no. We can't get it. We can get close to it, though.

First of all, the definition of the Allan variance comes with a set of
assumptions. It assumes that the dead-time is zero. If it is very near zero
(i.e. just a fraction of tau0), you will get values very near the true
Allan variance, and it may be handled using either the B2 or B3 bias
function. The bias functions were invented to translate a non-zero-dead-time
measurement into a zero-dead-time measurement. To do this, the
dominant noise form for the intended tau needs to be identified; this is
where reading NIST SP1065 becomes useful, and it is actually very simple to
implement.
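
As I read the SP1065 convention (my paraphrase, so take the exact notation
as an assumption), B2 is the ratio of the two-sample variance taken with
dead time to the one taken without, so a dead-time measurement with
r = T/tau and dominant noise exponent mu is rescaled roughly as

    \sigma_y^2(\tau)\big|_{T=\tau} \approx \frac{\sigma_y^2(T,\tau)}{B_2(r,\mu)}

with B3 handling the case where the dead time is distributed over averaged
measurements.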

Second, the bandwidth of the measurement system needs to be known and
documented with the measurement, as the WPM and FPM noise forms have Allan
variance values that depend on the system bandwidth.
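
For reference, the usual textbook expressions (standard results, not
something specific to this thread) make the f_H dependence explicit:

    \text{WPM:}\quad \sigma_y^2(\tau) = \frac{3 f_H\, h_2}{4\pi^2 \tau^2}
    \qquad
    \text{FPM:}\quad \sigma_y^2(\tau) \approx
        \frac{\left[1.038 + 3\ln(2\pi f_H \tau)\right] h_1}{4\pi^2 \tau^2}

so quoting an ADEV number for these noise types without f_H is not really
meaningful.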

Third, the bandwidth limit itself is assumed to be far away from the
taus of interest, or else the traditional formulas for the various
noise forms are not valid.

Fourth, the roll-off of the system bandwidth is assumed to be brick-wall.
Again, for WPM and FPM noise this will have a noticeable effect, but
the other noise forms will also be affected if they are too close to the
limit. The theoretical formulas often replicated for the noise types
do not include the slope tail, but are simply integrated over f
from 0 to f_H, ignoring the slope.

Fifth, the definition assumes an infinite average from minus infinity to
plus infinity. We can't wait that long, and we simply weren't there to set
up the measurement to start with, so we have to resort to statistical
estimators. Statistical estimators can be biased (in scale or offset)
and have different efficiencies in using the available data to
come arbitrarily close to the true value, without ever reaching it.
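
A minimal sketch of one such estimator (assuming an array y of M
back-to-back fractional-frequency averages at a single tau, with zero dead
time; the function name is just for illustration):

    import numpy as np

    def avar_estimate(y):
        # Non-overlapping two-sample (Allan) variance estimate:
        # mean of (y[i+1] - y[i])^2 over adjacent pairs, divided by 2.
        y = np.asarray(y, dtype=float)
        d = np.diff(y)
        return 0.5 * np.mean(d * d)

It converges toward, but never exactly reaches, the true value, and its
confidence interval depends on the dominant noise type.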

Sixth, the definition assumes a system free of systematic drift,
environmental effects and the like, which would otherwise limit the
measurement, as it is intended to be used for noise only.

Seventh, all measurements include imperfections such as trigger jitter,
reference stability, circuit stability, circuit non-linearity, cross-talk,
temperature dependence, resolution, etc.

... and as you have probably gathered by now, I could keep going.

So, the basic assumption of being able to get the "true" value is false,
and we have to settle for second best... a close-enough approximation. If
you look into the roots of the Allan variance you will discover that it
forms a tentative base case for a number of measurements, with many
strings attached. Additional details have been worked out over the
years. The field is complex and diverse.

I think one has to be humble when speaking of the "true Allan variance", in
that there will always be flaws in the data one has collected and the
methods one is using. One needs to be open-minded enough to see that,
regardless of how I collect it, I need to be able to re-evaluate it, compare
it and essentially acknowledge that "it is to the best of my current
understanding". In this hunt for the unobtainable, removing error sources
becomes a matter of art.

Cross-correlation gain is among the tricks we pull out of the hat to
get below some of these limits.
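
A standard rule of thumb for that trick (a general property of
cross-spectrum averaging, not a claim about any particular setup in this
thread): noise that is uncorrelated between the two measurement channels
averages down in the cross-spectrum roughly as

    \Delta \approx 5 \log_{10} N \ \text{dB}

for N averages, while the common DUT noise stays put, which is what buys
the extra measurement floor.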

Cheers,
Magnus

SR
Steve Rooke
Sat, Jun 5, 2010 11:19 AM

So, at best, it's an estimate.

Steve

On 5 June 2010 23:07, Magnus Danielson magnus@rubidium.dyndns.org wrote:

On 06/03/2010 02:15 PM, Ulrich Bangert wrote:

Gentlemen,

the discussion between Bruce and Warren concerning Warren's implementation
of NIST's "Tight PLL Method" has caused quite a stir in our group.

My scientifical knowledge about the discussed topic is so much inferior
compared to Bruce's one that I don't have the heart to enter a
contribution
to the discussion itself. It may however be helpful to have a look at the
discussion from a "philosophy of science" point of view.

The most basic form of logic is the propositional logic.

I think even attempting propositional logic has a basic flaw, namely can the
tentative goal be reached at all?

In this case, can we get a "True Allan variance" measure?

The answer is simply no. We can't get it. We can get close to it thought.

First of all, the definition for Allan variance comes with a set of
assumptions. It assumes that dead-time is zero. If it is very near zero
(i.e. just a fraction of tau0), you will get values very near the true Allan
variance, and it may be handled using either the B2 or B3 bias function. The
bias functions was invented to translate a non-zero dead-time measurement
into a zero-dead-time measurement. To do this, the dominant noise-form for
the intended tau needs to be identified, this is where reading NIST SP1065
becomes useful and actually very simple to implement.

Second, the bandwidth of the measurement system needs to known and
documented with the measurement, as the WPM and FPM noise forms will have
Allan variance measures depending on the system bandwidth.

Third, the bandwidth limit itself is assumed to be far away from the taus of
interest, or else the traditional formulas for various noiseforms is not
valid.

Fourth, the slope of the system bandwidth is assumed to be brick-wall.
Again, for WPM and FPM noises, this will have a noticeable effect, but the
other noise forms will also be affected if they are too close the limit. The
theoretical formulas often replicated for the noise types does not include
include the slope tail, but is simply integrated over f from 0 to f_H and
then ignores the slope.

Fifth, the definition assumes an infinit average from minus infinity to plus
infinity. We can't wait that long and we just wasn't there to setup the
measurement to start with, we have to revert to statistical estimators.
Statistical estimators can then be biased (scale or offset values) and have
different efficiency in using the available data to come arbitrarilly close
to the true value, without reaching it.

Sixth, the definition assumes a system of no systematic drift, environmental
effects and such which will limit the measurement as it is intended to be
used for noise only.

Seventh, all measurements includes imperfections such as trigger jitter,
stability of reference(s), stability of circuit, non-linearity of circuit,
cross-talk, dependence on temperature, resolution, etc. etc.

... and as you probably got by now, I can keep going on.

So, the basic assumption of being able to get the "True" value is false, so
we have to revert to second best... close enought approximation. If you look
into the roots of Allan variance you will discover that it forms a tentative
base-case for a number of measurements, with many strings attached to it.
Additional details have been worked out over the years. The field is complex
and diversed.

I think one has to be humble when relating to "True Allan variance" in that
there will always be flaws in the data one has collected and the methods one
is using. One needs to be open-minded to see that regardless of how I
collect it, I need to be able to re-evaluate it, compare it and essentially
acknowledge "that it is to the best of my current understanding". In this
hunt for the unobtainable, trying to remove error sources becomes a matter
of art.

Cross-correlation gains is among the tricks in the hat we pull out to get
below some limits.

Cheers,
Magnus



--
Steve Rooke - ZL3TUV & G8KVD
The only reason for time is so that everything doesn't happen at once.

  • Einstein
MD
Magnus Danielson
Sat, Jun 5, 2010 11:58 AM

On 06/05/2010 01:19 PM, Steve Rooke wrote:

So, at best, it's an estimate.

Yes.

How good it is, how fast you get it, how much you pay for it and how
much effort it is to get and operate is the issue.

Proving that measurements are accurate is actually hard. Getting
sufficiently good relative measurements (for the money, effort etc.) is
easier most of the time.

So what we have to do is to study various forms of impairments, learn
their effects, learn how to deal with them, and learn how various
approaches have benefits and deficiencies. The deeper I study this, and
the more of the things I initially ignored but forced myself to
follow up on, the more complex the issue becomes and things appear in a
different light. You get humbled by learning just how little you knew as
you learn more. It is a time-consuming effort, but I hope some of it
pays off in my contributions to the Allan variance article on Wikipedia.
I have still not delivered a complete view of the things I have
learned recently, even if I hint at some of it. There are a number of
statements in there which are unsatisfactory in that they do not have a
complete inline reference, but I think I have done a fairly good job so far.

Few people seem to estimate the values of h_-2, h_-1, h_0, h_1 and h_2,
even though both phase noise and time/frequency difference data may be used
for it.
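
For readers unfamiliar with the notation, these are the coefficients of the
usual power-law model of the one-sided fractional-frequency PSD,

    S_y(f) = h_{-2} f^{-2} + h_{-1} f^{-1} + h_0 + h_1 f + h_2 f^2,
    \qquad 0 < f \le f_H

and fitting them is what ties a phase-noise plot and an ADEV plot of the
same device together.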

Also, modern cheap programmable TCXOs break the model as they have a
hump in the phase noise due to their locked PLL, which the original
model does not allow for. The autocorrelation function will be quite
different. Notice how this ripples over to other locked oscillators such
as passive masers, GPSDO etc.
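
The hump is easy to motivate with the usual locked-oscillator decomposition
(a standard PLL result, stated here only as a sketch): with closed-loop
lowpass response H(f),

    S_\varphi^{\mathrm{out}}(f) = |H(f)|^2\, S_\varphi^{\mathrm{ref}}(f)
        + |1 - H(f)|^2\, S_\varphi^{\mathrm{vco}}(f)

so near the loop bandwidth neither term is fully suppressed and a peak can
appear, which a pure power-law S_y(f) cannot represent.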

There is still basic research to be done and basic research to be
recovered from the archives.

Cheers,
Magnus

J
jimlux
Sat, Jun 5, 2010 1:07 PM

Magnus Danielson wrote:

Also, modern cheap programmable TCXOs break the model as they have a
hump in the phase noise due to their locked PLL, which the original
model does not allow for. The autocorrelation function will be quite
different. Notice how this ripples over to other locked oscillators such
as passive masers, GPSDO etc.

Yes... once one moves beyond a simple oscillator/resonator and amplifier,
you're out of the zone where the simple Leeson model works all the
time. The curves have lumps and bumps, and simple approximations of
the integration don't work any more.

And then you have the whole business of explaining "why do I care what the
phase noise/Allan deviation is", or of trying to relate noise performance to
overall system performance (e.g. what happens if the phase noise at
1 MHz offset is 20 dB worse than expected?).

For some simple cases, blackboard sketches of reciprocal mixing and such
help, but when it gets more complex... or when you're trying to relate
an integrated phase jitter spec to the distribution that creates it....
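
One concrete link between the curve and a single-number spec (the standard
definition, given here only as a sketch): the RMS phase jitter integrated
between offsets f1 and f2, for L(f) in dBc/Hz and a carrier at f_0, is

    \sigma_\varphi = \sqrt{\,2 \int_{f_1}^{f_2} 10^{L(f)/10}\, df\,}
        \ \text{rad}, \qquad \sigma_t = \frac{\sigma_\varphi}{2\pi f_0}

so a 20 dB excess at 1 MHz offset only matters to the extent that that
region dominates the integral, which is exactly the kind of reasoning that
gets hard on a blackboard once the curve has lumps in it.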

SR
Stanley Reynolds
Sat, Jun 5, 2010 4:12 PM

I have no problem with strong points of view; in some ways they increase my enthusiasm for the topic. The medium of email does have its limits, but why censor or ignore the discussion if it includes these indications of a strong belief in one's view? We have many "dry" papers to read; please don't censor yourself because it may upset someone, but try to express yourself as well as your viewpoint. We are each responsible for our own feelings, but not for everyone else's. As to the current discussion, it still has value for me, and I thank all who have advanced my understanding.

Stanley

SR
Steve Rooke
Mon, Jun 7, 2010 1:36 PM

There is much to learn and there will always be much to learn; we only
have to look at history for examples of this. Provided we never lose
sight of this point, our path to enlightenment will always be
open.

Steve

On 5 June 2010 23:58, Magnus Danielson magnus@rubidium.dyndns.org wrote:

On 06/05/2010 01:19 PM, Steve Rooke wrote:

So, at best, it's an estimate.

Yes.

How good it is, how fast you get it, how much you pay for it and how much
effort it is to get and operate is the issue.

Getting accurate measurements is hard to prove actually. Getting
sufficiently good relative measurements (for money, effort etc) is easier
most of the times.

So what we have to do is to study various forms of impairments, learn their
effects, learn how to deal with them and learn how various approaches have
benefits and defficiencies. The deeper I study this, and the more of the
things I have initially ignored but forced myself to follow up, the more
complex the issue becomes and things comes in a different light. You get
humbled by learning just how little you knew as you learn more. It is a
time-consuming effort, but I hope some of it pays of in my contributions to
the Allan variance article on Wikipedia.
I have still not delivered a complete view from the things I have learned
recently, even if I hint some of it. There is a number of statements in
there which is unsatisfactory in that they do not have a complete inline
reference, but I think I have done a fairly god job so far.

Few people seems to estimate the values of h_-2, h_-1, h_0, h_1 and h_2 even
if both phase noise and time/frequency difference data may be used for it.

Also, modern cheap programmable TCXOs break the model as they have a hump in
the phase noise due to their locked PLL, which the original model does not
allow for. The autocorrelation function will be quite different. Notice how
this ripples over to other locked oscillators such as passive masers, GPSDO
etc.

There is still basic research to be done and basic research to be recovered
from the archives.

Cheers,
Magnus



--
Steve Rooke - ZL3TUV & G8KVD
The only reason for time is so that everything doesn't happen at once.

  • Einstein