Monday, 11 December 2017

Toomas Karmo: Part U: Philosophy of Perception, Action, and "Subjectivity"

Quality assessment:

On the 5-point scale current in Estonia, and surely in nearby nations, and familiar to observers of the academic arrangements of the late, unlamented, Union of Soviet Socialist Republics (applying the easy and lax standards Kmo deploys in his grubby imaginary "Aleksandr Stepanovitsh Popovi nimeline sangarliku raadio instituut" (the "Alexandr Stepanovitch Popov Institute of Heroic Radio") and his  grubby imaginary "Nikolai Ivanovitsh Lobatshevski nimeline sotsalitsliku matemaatika instituut" (the "Nicolai Ivanovich Lobachevsky Institute of Socialist Mathematics") - where, on the lax and easy grading philosophy of the twin Institutes, 1/5 is "epic fail", 2/5 is "failure not so disastrous as to be epic", 3/5 is "mediocre pass", 4/5 is "good", and 5/5 is "excellent"): 4/5. Justification: Kmo had time to develop the necessary points to reasonable length.


Revision history:

All times in these blog "revision histories" are stated in UTC (Coordinated Universal Time/ Temps Universel Coordonné, a precisification of the old GMT, or "Greenwich Mean Time"), in the ISO-prescribed YYYYMMDDThhmmZ timestamping format. UTC currently leads Toronto civil time by 5 hours and currently lags Tallinn civil time by 2 hours. 

  • 20171214T1740Z/version 3.3.0: Kmo added remarks in a couple of places on the STOS, now making it clear (he had previously overlooked this) both (a) that this week's detailed STOS construction is more elaborate than it has to be, and (b) that the elaborate construction does secure an expository advantage, by making the 0s specially and dramatically sparse within the STOS's so-heavy population of 1s. - Kmo reserved the right to make further tiny, nonsubstantive, purely cosmetic, tweaks over the coming 48 hours, as here-undocumented versions 3.3.1, 3.3.2, 3.3.3, ... . 
  • 20171213T2039Z/version 3.2.0: Kmo, in further checking, corrected what was in essence an error of substance, in the final words he put into the mouth of the Crown Prosecution during the trial of the Mathematical Logic Three. (It is not that the Mathematical Logic Three could reasonably be asked for a proof that no IT-orderly sequence Olivia exists. No: the barrister must, rather, aver that they could reasonably be asked for a proof that no condition-conformant IT-orderly sequence exists! One thing we do know by now is that there exist IT-orderly sequences. The STOS, at any rate, is such a sequence.) - Kmo reserved the right to make tiny, nonsubstantive, purely cosmetic, tweaks over the coming 48 hours, as here-undocumented versions 3.2.1, 3.2.2, 3.2.3, ... . 
  • 20171213T2023Z/version 3.1.0: Kmo, in checking, found a silly error of substance. He had written, in his proof that there are infinitely many primes, "So either G + 1 is not itself prime, or" where what was needed was, rather, "So either G + 1 is itself prime, or". - Kmo reserved the right to make tiny, nonsubstantive, purely cosmetic, tweaks over the coming 48 hours, as here-undocumented versions 3.1.1, 3.1.2, 3.1.3, ... .  
  • 20171213T1953Z/version 3.0.0: Kmo finished uploading a version in coherent full-sentences prose. He reserved the right to make tiny, nonsubstantive, purely cosmetic, tweaks over the coming 48 hours, as here-undocumented versions 3.0.1, 3.0.2, 3.0.3, ... . 
  • 20171212T2042Z/version 2.0.0: Kmo managed to upload a duly polished finegrained outline. (To his alarm, he had found in preparing it that he needed  to correct some of the thinking that went even into the coarsegrained outline.) He now proceeded to the first few steps in the approximately 3-hour task which is the conversion of such an outline into coherent full-sentences prose. He imagined that he would under pressure of other duties have to down tools by UTC=20171212T2130Z or so, and that he would manage to complete the job only over the coming 24 hours. Oi veh, oy vey zmir, oi veh gewalt.
  • 20171212T1557Z/version 1.0.0: Kmo, struggling a bit with ideas, and so running rather slowly, managed to upload a coarsegrained outline. He hoped over the coming 3 hours to finish converting this into a finegrained outline, and thereupon to start converting his work into coherent full-sentences prose.


[CAUTION: A bug in the blogger server-side software has in some past months shown a propensity to insert inappropriate whitespace at some points in some of my posted essays. If a screen seems to end in empty space, keep scrolling down. The end of the posting is not reached until the usual blogger "Posted by Toomas (Tom) Karmo at" appears. - The blogger software has also shown a propensity, at any rate when coupled with my erstwhile, out-of-date, Web-authoring uploading browser, to generate HTML that gets formatted in different ways on different downloading browsers. Some downloading browsers have sometimes perhaps not correctly read in the entirety of the "Cascading Style Sheets" (CSS) which on all ordinary Web servers control the browser placement of margins, sidebars, and the like. If you suspect CSS problems in your particular browser, be patient: it is probable that while some content has been shoved into some odd place (for instance, down to the bottom of your browser, where it ought to appear in the right-hand margin), all the server content has been pushed down into your browser in some place or other. - Finally, there may be blogger vagaries, outside my control, in font sizing or interlinear spacing or right-margin justification. - Anyone inclined to help with trouble-shooting, or to offer other kinds of technical advice, is welcome to write me via Toomas.Karmo@gmail.com.]

I have been slowed down this week not by struggles with emotions (in my particular often-recurring case, this involves depression) but by struggles with ideas. Now, at last, on a Tuesday and a Wednesday rather than on a Monday, I feel able to advance from messy multicoloured pencil jottings at the handwriting desk to the mechanics of a coarsegrained blogspot outline, a finegrained blogspot outline (itself departing, alas, nontrivially in some of its ideas from the lately so-solid-seeming coarsegrained blogspot outline), and coherent full-sentences blogspot prose. - The outlining, I should perhaps remark for some conceivable, hypothetical, reader's hypothetical benefit, is made in the spirit of my 2003 essay "Logic Blueprints Take the Stress out of Writing Nonfiction", on my Web page http://www.metascientia.com/PNNN____lit/PNNN____logic_blueprints.html

The struggle with ideas included the worried realization, at a couple of serendipitous moments away from handwriting desk and keyboard, both on Monday of this week and on Tuesday of this week, that while I had been duly pessimistic last week regarding Richard von Mises, I may have failed last week to be in a due-diligence way adequately agnostic regarding Martin-Löf with Schnorr and Levin (henceforth "M-L/S/L" - or, as I shall soon also say in a courtroom spirit, the "Mathematical Logic Three"). 

It will be recalled that the M-L/S/L condition, as I have at any rate managed in an imprecise way to grasp it, is the following: "Infinite bit sequence Roger is such that Roger's successive finite initial segments eventually become, and thereafter forever remain, Kolmogorov-disorderly". This is, to be sure, on my confessedly superficial and imprecise understanding, based only on my rapid glance at encyclopedia materials. 

I still think I was right, in discussing the M-L/S/L condition here in "Part T" last week, to reject it as a necessary condition of randomness. Last week, I sketched the construction of a rather sinister infinite sequence Sam that fails the M-L/S/L condition, and yet does sink so low as to be random. My suggested Sam sank to that depth through featuring randomly timed "barbarian incursions" of 0s, disagreeably interrupting his long PAX ROMANA stretches of 1s. Those barbarian incursions proved so long as to render some of Sam's successive finite initial segments Kolmogorov-orderly, with his bouts of finite-initial-segment Kolmogorov-orderliness therefore never ceasing for good. 

But now I think that I was wrong in "Part T" to advance the rather confident countervailing suggestion that the M-L/S/L, or "Mathematical Logic Three", condition is at least sufficient for (in other words, is at least a guarantee of) randomness. A more correct decision would have been the reserving of judgement, in an outright agnosticism, in which I would most fulsomely stress my ignorance concerning (perhaps mission-decisive?) technical details. It could be that an infinite bit sequence "Olivia", whose finite initial segments eventually become, and forever thereafter remain, Kolmogorov-disorderly, might nevertheless rise to the modest height of IT-orderliness. It is not that the Olivia I am about to cite rises to that magnificent "shimmering height" which is T-orderliness. Nyet/нет, tovarishch Polkovnik. But for mere IT-orderliness, conceivably da/да - at least, "conceivably" enough to make a circumspect agnosticism, rather than a brash recommendation of M-L/S/L as sufficient-for-randomness, the appropriate stance.

And indeed I am going to suggest here that far from having to construct some ingenious new Olivia, we might already possess one - namely, the now-familiar IT-orderly "Specially Troubling Initial Sequence", or STOS, as sketched in "Part R" (from 2017-11-20 or 2017-11-21). 

****

Before going too far, it is advisable for me to specify the STOS in more detail than I presented in the "Part R" sketch. To my grief, I now find I did not set up my Tidy Mapping, on which my STOS relies, coherently. I made the mistake of naively specifying "dictionary order" in building the Tidy Mapping. The Tidy Mapping is not coherent unless it results in a straightforward listing of all Turing machines, in set-theory technical terms a listing of "order type omega" rather than of "order type omega plus omega plus omega plus ...". On a dictionary ordering of words in some language whose words are permitted to attain any finite length, we might, alas, already have an infinite number of entries starting with the first letter. (In English-dictionary terms, there might already be an infinite number of words starting with "A" - with ABE, for instance, eventually followed by ABELARD, followed by ABELARDIAN, and so on forever, as the individually finite-length A-words get longer and longer. Actual natural-language dictionaries are secure against such explosions-into-foreverland only because the total number of words starting with any one given letter is finite.)

So here is a repair in my hitherto inept "Tidy Mapping". The Turing machines are to be enumerated in a multi-volume dictionary, in which the alphabetically ordered, Turing-machine-encoding, words of just one letter (if there are any) make up the (finite) Volume 1, and the alphabetically ordered, Turing-machine-encoding, words of just two letters (if there are any) make up the (finite) Volume 2, and so on. Then every Turing machine is listed exactly once, on some page or other of some volume or other, with each volume (I reiterate) managing to be finite. (The individual volumes are finite because the underlying alphabet is itself finite.)

With this repair made, we do, at long last, possess a coherently specified "Tidy Mapping", mapping the entire set of Turing machines one-to-one  (not just into, but, more strongly, onto) the positive integers. For each positive integer i, i is paired with Machine X if and only if X is the ith machine to be mentioned in the infinitely many finite-length volumes of the dictionary, or Grand Multi-Volume Listing of All Turing Machines.
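For concreteness, the volume-by-volume ordering can be sketched in a few lines of Python. (The two-letter alphabet "a"/"b" below is a pure stand-in for whatever finite alphabet actually encodes the Turing machines; the point is only that every finite word receives a finite index, so that the listing has order type omega.)

```python
from itertools import count, islice, product

ALPHABET = "ab"  # hypothetical stand-in for the Turing-machine-encoding alphabet

def tidy_enumeration():
    """Yield all finite words in "multi-volume dictionary" order:
    Volume 1 = the length-1 words alphabetically, Volume 2 = the
    length-2 words alphabetically, and so on. Every word appears
    exactly once, at some finite position."""
    for length in count(1):
        for letters in product(ALPHABET, repeat=length):
            yield "".join(letters)

print(list(islice(tidy_enumeration(), 8)))
# prints ['a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa', 'aab']
```

A naive dictionary order, by contrast, would never get past the words beginning with "a", since there are infinitely many of them.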

We next note the convenient school-mathematics fact that the primes are an infinite subset of the positive integers. (For suppose, per absurdum, that the positive integers harbour a greatest prime, say P. Then form the "Grand Product", G, of all the primes. G is a very large number, namely the product 2 x 3 x 5 x 7 x 11 x 13 x 17 x ... x P. Consider, next, the number G + 1. On our supposition, G + 1 cannot be prime, since it is strictly greater than P. And yet 2 does not divide evenly into G + 1, since the attempted division leaves a remainder of 1. Likewise, 3 does not divide evenly into G + 1, since the attempted division leaves a remainder of 1. We proceed in this spirit for each prime in the supposedly complete enumeration 2, 3, 5, 7, 11, 13, 17, 19, ... , P, finding in each and every case that the attempted division leaves a remainder of 1. So either G + 1 is itself prime, or else it has some prime factor smaller than itself and nevertheless greater than P. In either case, P fails to be the last of the primes.)
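The argument can be spot-checked numerically; the Python fragment below is merely an illustration on the small case P = 13, not of course any part of the proof.

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13]      # the supposedly complete list, with P = 13
G = prod(primes)                   # the "Grand Product": 30030
for p in primes:
    assert (G + 1) % p == 1        # each attempted division leaves remainder 1
# G + 1 = 30031 = 59 x 509: not itself prime, but both of its prime factors
# exceed P, exactly as the second horn of the argument requires.
print(G + 1)                       # prints 30031
```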

Consider, then, from the overall Tidy Mapping of Turing Machines 1-to-1 onto the positive integers, just those positive integers f such that f denotes a Turing machine that runs forever when started on an all-blanks tape. Let those various special positive integers be f-sub-1, f-sub-2, f-sub-3, ... . (They must themselves be an infinite collection. If they were a finite collection, then some Turing machine would solve the Halting Problem.) Call the successive primes p-sub-1, p-sub-2, p-sub-3, ... (so, for instance, p-sub-1 is 2, and p-sub-2 is 3, and p-sub-3 is 5, and p-sub-6 is 13). Form the STOS, as an infinite sequence overwhelmingly dominated by 1s, with only the exceedingly rare 0 (except that there is no last 0) as follows:

  • The STOS starts with an unbroken run of exactly (p-sub-1)-raised-to-the-power-(f-sub-1) 1s. 
  • The STOS then has exactly one 0. 
  • Next, the STOS has an unbroken run of exactly (p-sub-2)-raised-to-the-power-(f-sub-2) 1s. 
  • The STOS then has exactly one 0. 
  • Next, the STOS has an unbroken run of exactly (p-sub-3)-raised-to-the-power-(f-sub-3) 1s.
  • (And so on.) 
On a naive and superficial inspection, the STOS looks orderly, since its infinitely many 0s seem so rare within any finite initial STOS-segment. They in fact become so rare that for practical purposes we hardly ever see them. Here we have not Rome under dramatic onslaught from randomly timed barbarian invasions of increasing length, as with last week's Sam, but rather the pleasant-looking world of such well-bred pre-war English detectives as G.K. Chesterton's "Father Brown". Villains are present indeed, and yet they are very much lurking, generally very hard to see, amid the abounding rhododendrons or topiaries of the pre-war English Country House.

And the STOS (to reiterate a point already much stressed) is orderly - just not in the naively Turing sense. In place of being naively T-orderly, it is, more subtly, IT-orderly.
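The block structure just specified can be sketched in Python. One caveat deserves loud stress: the true exponents f-sub-1, f-sub-2, ... index forever-running Turing machines and are not computable (computing them would amount to solving the Halting Problem), so the exponents fed in below are pure placeholders, chosen only to exhibit the shape of the construction.

```python
def stos_prefix(pf_pairs):
    """Given pairs (p_i, f_i), emit the corresponding STOS prefix:
    an unbroken run of p_i ** f_i 1s, then a single 0, for each pair in turn."""
    bits = []
    for p, f in pf_pairs:
        bits.extend([1] * (p ** f))
        bits.append(0)
    return bits

# PLACEHOLDER exponents only - the real f-sub-i cannot be computed:
prefix = stos_prefix([(2, 2), (3, 3), (5, 5)])
# Runs of 2**2 = 4, 3**3 = 27, and 5**5 = 3125 ones, each closed by a lone 0.
# Already the 0s are conspicuously sparse; genuine f-values would be vastly
# larger, making the 0s more dramatically sparse still.
print(prefix.count(0), len(prefix))    # prints 3 3159
```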

It might fairly be objected here that I have now made the STOS more elaborate than it had to be. (I neglected the objection in all versions of this prior to version 3.3.0.) It would have been enough to construct the STOS as follows, avoiding my intricate detour into the sequence p-sub-1, p-sub-2, p-sub-3, ... of primes: 

  • The STOS starts with an unbroken run of exactly f-sub-1 1s. 
  • The STOS then has exactly one 0. 
  • Next, the STOS has an unbroken run of exactly f-sub-2 1s. 
  • The STOS then has exactly one 0. 
  • Next, the STOS has an unbroken run of exactly f-sub-3 1s.
  • (And so on.)
But there is an expository advantage in building up the STOS in this week's particular way. On this week's specially elaborate construction, it becomes specially vivid that it is the 1s that predominate. The issues to be debated in my upcoming courtroom trial of the "Mathematical Logic Three" (a little later in this present posting) consequently emerge with a special vividness. 

****

Before going too far, it is also advisable for me to make a few remarks on variants of the Halting Problem.

A Turing machine functions as a fork with two tines. On the one hand is the overwriteable, so very mutable, infinite tape.  This is used as input in cases where input is desired, as output in cases where output is desired, and in essentially all scenarios as scratchpad memory. On the other hand is the immutable finite internal program table. In searching for special, solvable, cases of the Halting Problem, it seems appropriate to direct attention to both tines.

I gather this week from a hasty glance at some Web resource such as Wikipedia that some work has been done in relation to the first-mentioned tine. People have been considering Turing-like machines which run on a merely finite tape. The literature somehow describes this in terms of a tape with a pair of "end markers". Perhaps the literature term "Linear Bounded Automaton" is also somehow relevant. I gather that on some definition of a Turingish-machine-with-finite-tape, the Halting Problem has been found in some sense machine-soluble.
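The finite-tape result is at least easy to make plausible in code: with the tape finite, there are only finitely many configurations (state, head position, tape contents), so any run must either halt or revisit a configuration, and a revisited configuration convicts the machine of looping forever. Here is a sketch of my own (not a rendering of any particular literature construction; in particular, I crudely clamp the head at the tape ends rather than handle the literature's "end markers" properly):

```python
def halts_on_finite_tape(table, n_cells, start_state="q0", halt_state="H"):
    """Decide halting for a machine confined to n_cells tape squares.
    table maps (state, symbol) -> (new_symbol, move, new_state), move in {-1, +1}.
    Only finitely many configurations exist, so a repeated configuration
    proves the machine runs forever."""
    tape = ["B"] * n_cells                     # all-blanks finite tape
    state, head = start_state, 0
    seen = set()
    while state != halt_state:
        config = (state, head, tuple(tape))
        if config in seen:
            return False                       # loop detected: never halts
        seen.add(config)
        sym, move, state = table[(state, tape[head])]
        tape[head] = sym
        head = max(0, min(n_cells - 1, head + move))  # crude stand-in for end markers
    return True

# A one-row machine that marches rightward forever, pinned at the tape's end:
loopy = {("q0", "B"): ("B", +1, "q0")}
print(halts_on_finite_tape(loopy, n_cells=3))   # prints False
```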

What, now, about the second tine? What happens if we confine attention to Turing machines with fewer than some bounding number u ("u" for "upper bound") of rows in the internal program table? In terms of the Turing-Davis formalism I introduced in "Part R", what happens if we confine attention to machines defined by a sequence of fewer than u 5-tuples? Could there be a Turing machine H-sub-u which decides, in a finite number of steps, whether any Turing machine of table-of-5-tuples length strictly less than u eventually halts when started on an all-blanks tape?

I would be grateful, both in the context of the present discussion and in the interest of a general improved understanding, at my so-inadequate-desk, of mathematical logic, to learn from someone whether this second-tine problem has been studied. As always, e-mails should be sent to Toomas.Karmo@gmail.com. One can of course also reply directly to this blog, under the blogspot commenting formalism, provided one respects the rules I set out in my first posting, back in 2016 April. (The rules provide that responses shall be published verbatim, whether or not I agree with them, provided that there is nothing like obscenity or libel, and provided also that posters specify in the body of their posting their true name, their municipality and nation of residence, and one of their duly functioning e-mail addresses.)

****

The best I can now offer, labouring under my various intellectual limitations, is a mock trial, in which a defending barrister argues for M-L/S/L (they stand, quivering in the courtroom dock, as the "Mathematical Logic Three"), and a Crown Prosecutor argues against them. The Defence has to suggest that the conceded orderliness in the STOS is accompanied by, happily, a failure of the condition proffered by the Mathematical Logic Three as guaranteeing randomness. The Prosecution, or "Crown", has to suggest that the proffered condition either holds for the (IT-orderly) STOS or holds for some tricky, and yet (IT-) orderly, Olivia other than the STOS, and that therefore the proffered condition is no guarantee of randomness.

In criminal trials, it is perhaps normal for the prosecution to open. The reason for this is in turn perhaps that it is the prosecution which is considered to carry the burden of proof. Where the burden of proof lies in the trial of the Mathematical Logic Three, I dare not say. For reasons of mere literary convenience, I do choose to have the Defence open proceedings.

****

The Defence begins in a commonsense way, by remarking that 0s are very rare indeed in the STOS (most especially as it has this week been specified, with an excursion into the infinite progression of primes p-sub-1, p-sub-2, ...), and become progressively rarer still as the inspected finite initial STOS-segments get progressively longer. Maybe this banal fact is enough, avers the Defence, eventually to make every finite initial segment compactly computable, in other words Kolmogorov-orderly. Since 1s are overwhelmingly predominant, it suffices (so this opening Defence argument runs) to direct any finite-initial-STOS-segment-generating Turing machine into a counted 1-writing loop, for the overwhelmingly greater part of its computation. Surely (the suggestion proceeds) the machine's internal program table will thereby be overwhelmingly shorter than the finite initial STOS-segment the machine is tasked with writing.
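The counted-loop idea admits a toy illustration in Python. (A toy only: genuine Kolmogorov bookkeeping would count the bits of a program for some fixed universal machine; the binary numeral below merely exhibits the logarithmic flavour on which this Defence argument relies.)

```python
def describe_run(n_ones):
    """A toy "program" for a run of n 1s: just the count, written in binary.
    The description's length grows like log2(n), while the run grows like n."""
    return bin(n_ones)[2:]

for n in (10, 1_000, 1_000_000):
    print(n, len(describe_run(n)))
# A run of a million 1s is pinned down by a 20-bit count - overwhelmingly
# shorter than the run itself, which is the situation the Defence hopes for.
```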

But here much will turn on the precise technical meaning assigned by the Mathematical Logic Three to "overwhelmingly more". And I, alas, as a feeble amicus curiae, am out of my depth. All I can say is that if my argument from last week, regarding the randomness of barbarian-troubled Sam, is a good one, then correspondingly the present Defence suggestion (namely, that the so-sparse-in-0s, and IT-orderly, STOS has the good fortune to fail the proffered M-L/S/L guarantee-of-randomness) looks good. However we proceed (say I, in my modest amicus curiae capacity), we should at least couple an appraisal of this week's dramatically-0-sparse, and yet orderly, prime-numbers-steered STOS construction with an appraisal of last week's dramatically 0-rich, and as I last week thought and this week still think random, Sam construction. 

This is not quite all from the Defence. After the court's luncheon recess, the Defence additionally remarks that some relief might be had from the above-sketched "second-tine" variant on the Halting Problem. The so-rare 0s in the STOS help denote (so runs this additional, post-prandial, argument) a sequence of Turing machines which is in a happy way ordered. First comes the encoding,  f-sub-1 (by way of an unbroken string of exactly p-sub-1-to-the-power-f-sub-1 1s), of a short running-forever Turing machine. Next comes the encoding, f-sub-2 (by way of a string of exactly p-sub-2-to-the-power-f-sub-2 1s), of a no-shorter running-forever Turing machine. The pattern continues without end. The successively denoted running-forever Turing machines form an endless parade, says the defending barrister, in which we never witness a forever-running machine with a table of some given length followed by a forever-running machine with a shorter table. (The table lengths of the individual successive forever-running machines in the endless parade, in other words, exhibit monotonic increase, even while perhaps not exhibiting strictly monotonic increase.)

If, the Defence now pleads (luncheon is well digested; tea-time approaches), for every positive integer u the Halting Problem can be Turing-solved for all Turing machines featuring an internal table of length less than u, then further relief is in prospect. Maybe finite initial segments of the STOS can be computed rather compactly. They can be computed (so concludes this post-prandial suggestion) not by the above-mentioned, pre-prandial, commonsense expedient of sending their generating machines into long 1-writing counted loops, but in a mathematically more sophisticated way, by having their successive generating machines solve successive versions, for a monotonically increasing positive-integers succession u, u-prime, u-prime-prime, ..., of the "second tine" limited Halting Problem.

****

The Crown (perhaps on Day Two in the trial against the Mathematical Logic Three, after breakfast) might make much of the fact that "overwhelmingly more" is spelled out in perilous technical detail by the Mathematical Logic Three. Perhaps their spelling-out will trigger their downfall, by making it false that the successive initial segments of the STOS eventually are "overwhelmingly" longer than the program tables of the successive machines successively generating them - whether these machines proceed naively, with counted 1-writing loops, or even in a more sophisticated way, on the strength of some solution to the second-tine limited Halting Problem.

And, as I have already remarked, the Crown can raise the spectre that even if the STOS has all its finite initial segments eventually becoming Kolmogorov-orderly, nevertheless some new IT-orderly Olivia, more cunning than the IT-orderly STOS, could succeed in satisfying the perilously proffered guarantee-condition of randomness. Perhaps the Crown will put on a bit of grumpy rhetoric, clutching its lapels as it growls: "Just as, Ladies and Gentlemen of the Jury, we would appreciate a constructive proof of the allegation that some condition-conformant infinite sequence does exist - this, Ladies and Gentlemen of the Jury, was the wish so reasonably expressed by 'Doctor Karmo' last week, in 'Part T', when he timidly hoped for a construction or a sketch-of-a-construction - so we really would, Ladies and Gentlemen, now appreciate a rigorous proof that no IT-orderly condition-conformant sequence, no fiendishly cunning condition-conformant 'Olivia' of any IT-orderly kind, can exist."

****

This is as far as I can safely go. At the moment, the verdict has to remain unknown, a circumspect agnosticism being my sole safe stance.

****

With this week's allotment of blogging time running out, I must finish. The homework assigned last week will have to be discussed in the next installment - perhaps next week, perhaps (since I should also perhaps try to blog on something like the theology of Christmas, or on Gregorian chant) only once Christmas is past.

Having already today, however, made some repairs to previous writeups, I should also make something of an improvement in the homework question as set last week. Let the reader draw a big black-ink rectangle, for the (perhaps empty, perhaps nonempty) set of all random infinite bit sequences (call it, this week and henceforth, "Randomia"). As in last week's presentation of this particular homework, let "Greenland" be that subset (perhaps empty, perhaps nonempty) of Randomia which comprises the infinite bit sequences S such that every T-selected infinite subsequence of S is random. As in last week's presentation of this particular homework, let "Redland" be that subset (perhaps empty, perhaps nonempty) of Randomia which comprises the infinite bit sequences S such that every IT-selected infinite subsequence of S is random. As in last week's presentation, call the (perhaps empty, perhaps nonempty) intersection of Redland and Greenland "Yule Country".

But now (this is what I in my unfeigned and shocking idiocy overlooked last week) give a name to that part of Randomia which is neither in Redland nor in Greenland. Since it is associated with no rainbow colour at all, call it "Drabland".

Use the terms "Extreme Redland" and "Extreme Greenland" in the same way as last week.

Randomia is thus partitioned into four mutually exclusive, jointly exhaustive subsets, namely Extreme Greenland ("eG"), Extreme Redland ("eR"), Yule Country ("Y"), and Drabland ("D"). Redland is, now as already last week, the union of eR and Y. Greenland is, now as already last week,  the union of eG and Y.

Drawing Randomia myself with black ink, and then wielding my trusty erasable green and red pencils for the overlapping Greenland and Redland, and finally for visual vividness putting a dashed erasable-blue-pencil outline around Yule Country and a dashed black-pencil outline around Drabland, I get the following:





(The photo can, as is usual in blogger publishing on blogspot, be enlarged with a mouse-click. In my particular drawing, Redland occupies the upper half of Randomia, and Greenland the left half of Randomia.)

Formally, we now have exactly 16 sets to play with. At one logical extreme is Randomia. At another logical extreme is the empty set. In addition, there are the following fourteen sets:

  1. eG
  2. eR
  3. Y
  4. D
  5. union of eG and eR
  6. union of eG and Y
  7. union of eG and D
  8. union of eR and Y
  9. union of eR and D
  10. union of Y and D 
  11. union of eR, Y, and D (complement in Randomia of eG)
  12. union of eG, Y, and D (complement in Randomia of eR)
  13. union of eG, eR, and D (complement in Randomia of Y)
  14. union of eG, eR, and Y (complement in Randomia of D) 
The homework is then the following: Upon conceding, for the sake of argument, the (in fact possibly contestable) assumption that Randomia is nonempty, what, if anything, can then be asserted - or perhaps can then somehow more plausibly or less plausibly be conjectured - regarding the emptiness or nonemptiness of these fourteen assorted sets? How light or grave, in other words, is the agnosticism under which we labour in the fourteenfold taxonomy, under the provisional governing assumption that Randomia is nonempty?
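Those who wish to double-check the bookkeeping of the sixteen sets can enumerate the unions of partition blocks mechanically; a small Python sketch:

```python
from itertools import chain, combinations

blocks = ["eG", "eR", "Y", "D"]     # the four mutually exclusive, jointly
                                    # exhaustive subsets of Randomia

# Every union of blocks corresponds to a subset of the four-element set:
unions = list(chain.from_iterable(
    combinations(blocks, r) for r in range(len(blocks) + 1)))

print(len(unions))                  # prints 16: Randomia, the empty set, and
                                    # the fourteen "middling" sets
middling = [u for u in unions if 0 < len(u) < len(blocks)]
print(len(middling))                # prints 14
```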

****

Before completely finishing for this week, I may as well direct attention to the availability of Web-based Turing-machine simulators, running displayed sample program tables, with the simulation authors in some or all cases even inviting the public to load and run their own program tables. (One can Google on, for example, the string "turing machine simulator".) One or more of these simulators might help make some of the ideas in recent installments vivid for some readers - possibly including those readers for whom Turing machines are somewhat novel, and whose appetites have been whetted by a visit to the "Mike Davey" YouTube clip which I mentioned at the end of "Part S" (on 2017-11-29 or so).

I am averse to spending much of my own scanty time exploring the various available simulators and comparing their respective merits. Some reader, however, may want to take on the task. To help such a hypothetical reader, I list, in an aprioristic spirit, some a-priori-reasonable tests of simulator merit:

  • Does the offered simulator in some way visually highlight the particular row of its program table that it is at any particular instant executing? 
  • Does the offered simulator make it possible to step manually, row by weary row, through a program table, upon repeatedly pressing something like a simulated doorbell-button? 
  • When manual stepping is turned off, does the offered simulator present a choice of running speeds (allowing the simulation operator, in other words, to run the clock either quickly or less quickly)? 
  • Does the offered simulator make it possible to start execution at any arbitrarily specified line in the program table?
  • Does the offered simulator make it possible, given an already-populated tape, to select any desired arbitrary square as the Boot Square before starting the simulation? 
  • Is the offered simulator supplied with abundant, and abundantly commented, samples of program tables? 
  • Are there useful accompanying tutorial materials, or other useful accompanying writeups? 
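For readers who would rather tinker than shop, a bare-bones simulator of the 5-tuple kind is only a few lines of Python. (This is a sketch of my own, not a description of any particular Web simulator; the 5-tuple convention loosely follows the Turing-Davis formalism of "Part R", with "B" for the blank symbol and "H" as a conventional halting state.)

```python
def run_turing(tuples, max_steps=1000, start="q0", halt="H"):
    """Simulate a machine given as 5-tuples (state, read, write, move, next).
    The tape is a dict indexed by integers, blank ("B") everywhere by default,
    so it is unbounded in both directions."""
    table = {(s, r): (w, m, n) for (s, r, w, m, n) in tuples}
    tape, head, state = {}, 0, start
    for _ in range(max_steps):
        if state == halt:
            break
        key = (state, tape.get(head, "B"))
        if key not in table:
            break                       # no matching row: the machine jams
        write, move, state = table[key]
        tape[head] = write
        head += {"L": -1, "R": +1}[move]
    if not tape:
        return ""
    return "".join(tape.get(i, "B") for i in range(min(tape), max(tape) + 1))

# A three-row table that writes three 1s and halts:
prog = [("q0", "B", "1", "R", "q1"),
        ("q1", "B", "1", "R", "q2"),
        ("q2", "B", "1", "R", "H")]
print(run_turing(prog))                 # prints 111
```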
****

I may as well also - this is a small thing, I admit - report the availability on YouTube of vids showing the action of the "Digi-Comp 1" three-flipflop mechanical computer which I mentioned in "Part S":

  • In a vid entitled "Digi-Comp 1 Plastic Computer", to a length of 2:16, from YouTube user "perkiert", is a clear view of both sides of the device. The respective views make it plain that the short horizontal white soft-plastic programming tubes in the front have a mechanically different role from the long white soft-plastic programming tubes in the back. (In the front, such a (short) tube arrests the inward movement of a spring-loaded vertical front-side metal rod. In the back, such a (long) tube does not arrest any rod movement, but instead engages with a spring-loaded vertical back-side metal rod in such a way as to cause a red-plastic flipflop to be pushed by the rod in one or the other of the two pertinent horizontal directions.) In my corner of the Web, the vid can be had through the URL https://www.youtube.com/watch?v=Qx5Iawpm5Kg. As of UTC=20171213T1707Z or so, it had garnered 8,155 views.  
  • In a vid entitled "Digi-Comp 1", to a length of 0:42, likewise from YouTube user "perkiert", is a clear demonstration that Digi-Comp 1 supports infinite loops. (The device is in this vid programmed to count 000, 001, 010, 011, 100, 101, 110, 111, 000, 001, ... .) In my corner of the Web, the vid can be had through the URL https://www.youtube.com/watch?v=mP3eNQ6ftsM. As of UTC=20171213T1707Z or so, it had garnered 9,979 views. 
[This is the end of the current blog posting.] 






Toomas Karmo: Debian GNU/Linux 9.3 Released 2017-12-09

Quality assessment:

On the 5-point scale current in Estonia, and surely in nearby nations, and familiar to observers of the academic arrangements of the late, unlamented, Union of Soviet Socialist Republics (applying the easy and lax standards Kmo deploys in his grubby imaginary "Aleksandr Stepanovitsh Popovi nimeline sangarliku raadio instituut" (the "Alexandr Stepanovitch Popov Institute of Heroic Radio") and his  grubby imaginary "Nikolai Ivanovitsh Lobatshevski nimeline sotsalitsliku matemaatika instituut" (the "Nicolai Ivanovich Lobachevsky Institute of Socialist Mathematics") - where, on the lax and easy grading philosophy of the twin Institutes, 1/5 is "epic fail", 2/5 is "failure not so disastrous as to be epic", 3/5 is "mediocre pass", 4/5 is "good", and 5/5 is "excellent"): 2/5. Justification: This was a minor posting, with scanty content.


Revision history:

All times in these blog "revision histories" are stated in UTC (Universal Coordinated Time/ Temps Universel Coordonné, a precisification of the old GMT, or "Greenwich Mean Time"), in the ISO-prescribed YYYYMMDDThhmmZ timestamping format. UTC currently leads Toronto civil time by 5 hours and currently lags Tallinn civil time by 2 hours.
  • 20171212T0201Z/version 1.0.0: Kmo uploaded an adequately finished version. He reserved the right to upload further tiny, nonsubstantive, purely cosmetic, tweaks over the coming 48 hours, as here-undocumented versions 1.0.1, 1.0.2, ... .


[CAUTION: A bug in the blogger server-side software has in some past months shown a propensity to insert inappropriate whitespace at some points in some of my posted essays. If a screen seems to end in empty space, keep scrolling down. The end of the posting is not reached until the usual blogger "Posted by Toomas (Tom) Karmo at" appears. - The blogger software has also shown a propensity, at any rate when coupled with my erstwhile, out-of-date, Web-authoring uploading browser, to generate HTML that gets formatted in different ways on different downloading browsers. Some downloading browsers have sometimes perhaps not correctly read in the entirety of the "Cascading Style Sheets" (CSS) which on all ordinary Web servers control the browser placement of margins, sidebars, and the like. If you suspect CSS problems in your particular browser, be patient: it is probable that while some content has been shoved into some odd place (for instance, down to the bottom of your browser, where it ought to appear in the right-hand margin), all the server content has been pushed down into your browser in some place or other. - Finally, there may be blogger vagaries, outside my control, in font sizing or interlinear spacing or right-margin justification. - Anyone inclined to help with trouble-shooting, or to offer other kinds of technical advice, is welcome to write me via Toomas.Karmo@gmail.com.]


I describe briefly my (routine, unremarkable) experience with last week's system upgrade. 

My migration was from Debian GNU/Linux 9.2 to Debian GNU/Linux 9.3. As with my description here at blogspot on 2017-10-12 or 2017-10-13 of an upgrade from Debian GNU/Linux 9.1 to Debian GNU/Linux 9.2, so again tonight I document my experience in the hope that a writeup will prove mildly helpful to a few people in the general Linux community. A report that something has gone smoothly can itself sometimes be useful, by providing a form of reassurance.

****

On Saturday morning, 2017-12-09, I saw the release notice for Debian GNU/Linux 9.3 pretty much as soon as it reached my e-mail inbox. I therefore presume I was alerted to the availability of the upgrade at most just hours after the actual release. It was, however, Saturday evening before I upgraded my workstation. 

****

I had been incrementally updating against an appropriate Debian Project mirror every few days, under Debian GNU/Linux 9.2. I was therefore likely to have been in a "zero-updates-needed" position Friday night - in other words, to have had my own personal-workstation "9.2" fully current in respect of the formal "9.2" definition. Perhaps as a consequence of this, updating against Saturday's "point release" (9.3, as distinct from 9.2) went quickly. 

In its current configuration (stable for the last few weeks?), my workstation has 2157 installed packages. Just 29 of these had to be updated in Saturday's transition from 9.2 to 9.3. No existing packages had to be removed, and no new packages had to be added. 

I followed the Debian Project recommendation to reboot after upgrading. 

****

To track the activity on my workstation, I generated lists of my 2157 installed packages with Debian's /usr/bin/dpkg -l command, both before and after my upgrade-cum-reboot. To examine how the two listings differed, I used the usual /usr/bin/diff tool. I show here just a portion of my /usr/bin/diff result, selecting a portion of the output relating to the kernel:
  • linux-image-4.9.0-4-amd64             4.9.51-3
  • linux-image-4.9.0-4-amd64             4.9.65-3
The same result regarding the kernel emerged from /bin/cat /proc/version, which I performed both before and after my upgrade-cum-reboot:
  • Linux version 4.9.0-4-amd64 (debian-kernel@lists.debian.org) (gcc version 6.3.0 20170516 (Debian 6.3.0-18) ) #1 SMP Debian 4.9.51-1 (2017-09-28)
  • Linux version 4.9.0-4-amd64 (debian-kernel@lists.debian.org) (gcc version 6.3.0 20170516 (Debian 6.3.0-18) ) #1 SMP Debian 4.9.65-3 (2017-12-03)
It can be seen from this /proc/version material that the new and the old kernel binaries were generated, from their respective C-or-similar source files, by the pertinent authority under the same major.minor.patch version (namely, 6.3.0) of the GNU C compiler.
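The before-and-after comparison described above can be mimicked in a few lines of Python, for readers who prefer scripting the check. The linux-image lines below mirror the /usr/bin/diff output quoted in this posting; the vim entry is invented purely for illustration, and the filtering logic is a sketch rather than a full dpkg-listing parser:

```python
import difflib

# Hypothetical "dpkg -l"-style snapshots, taken before and after an upgrade.
# The vim line is an invented example; the kernel lines mirror the posting.
before = [
    "linux-image-4.9.0-4-amd64 4.9.51-3",
    "vim 2:8.0.0197-4",
]
after = [
    "linux-image-4.9.0-4-amd64 4.9.65-3",
    "vim 2:8.0.0197-4",
]

# unified_diff flags lines present in only one of the two snapshots;
# we discard the "---"/"+++" file headers and the "@@" hunk headers.
changes = [line for line in difflib.unified_diff(before, after, lineterm="")
           if line.startswith(("-", "+"))
           and not line.startswith(("---", "+++"))]
```

On these two snapshots, `changes` holds exactly the departing and arriving kernel-package lines, one prefixed "-" and one prefixed "+".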

[This is the end of the current blog posting.]


Monday, 4 December 2017

Toomas Karmo: Part T: Philosophy of Perception, Action, and "Subjectivity"

Quality assessment:

On the 5-point scale current in Estonia, and surely in nearby nations, and familiar to observers of the academic arrangements of the late, unlamented, Union of Soviet Socialist Republics (applying the easy and lax standards Kmo deploys in his grubby imaginary "Aleksandr Stepanovitsh Popovi nimeline sangarliku raadio instituut" (the "Alexandr Stepanovitch Popov Institute of Heroic Radio") and his  grubby imaginary "Nikolai Ivanovitsh Lobatshevski nimeline sotsalitsliku matemaatika instituut" (the "Nicolai Ivanovich Lobachevsky Institute of Socialist Mathematics") - where, on the lax and easy grading philosophy of the twin Institutes, 1/5 is "epic fail", 2/5 is "failure not so disastrous as to be epic", 3/5 is "mediocre pass", 4/5 is "good", and 5/5 is "excellent"): 4/5. Justification: Kmo had time to develop the necessary points to reasonable length.
 
 
Revision history:
 
All times in these blog "revision histories" are stated in UTC (Universal Coordinated Time/ Temps Universel Coordonné, a precisification of the old GMT, or "Greenwich Mean Time"), in the ISO-prescribed YYYYMMDDThhmmZ timestamping format. UTC currently leads Toronto civil time by 5 hours and currently lags Tallinn civil time by 2 hours.

  • 20171206T2251Z/version 3.0.0: Kmo finished converting his finegrained outline into coherent full-sentences prose. He reserved the right to make tiny, nonsubstantive, purely cosmetic, tweaks over the coming 48 hours, as here-undocumented versions 3.0.1, 3.0.2, 3.0.3, ... . 
  • 20171205T1953Z/version 2.0.0: Kmo uploaded a finegrained outline, itself duly polished. He hoped to convert this at some point (not at the moment easily predicted) in the coming 26 hours into coherent full-sentences prose. He planned to do the work incrementally, uploading a bit of the conversion at a time. 
  • 20171205T0238Z/version 1.0.0: Kmo uploaded a mere coarsegrained point-form outline.  He hoped, either (1) at some point in the next 3 hours or else (2) (less ambitiously) at some point in the next 16 hours but not in the next 3 hours, to finish converting his coarsegrained point-form outline into a finegrained point-form outline. Further, he hoped soon after completion of that conversion to find the three or four additional hours needed for upgrading his finegrained outline into coherent full-sentences prose.

 


We have been examining the notion of an infinite bit sequence, in the sense of a sequence possessing a first binary bit, a second binary bit, and so on, and yet possessing no last binary bit.  I have already made some rather formal remarks, in almost a quasi-mathematical sense, on the troubling (because definition-recalcitrant) concept of a random infinite bit sequence. It is advisable to proceed through a quite long set of further formal remarks in this installment ("Part T"), and at the very least in some initial portion of the next installment ("Part U"), before finally standing back and attempting to draw philosophical morals about our current topic of philosophical concern within the Geography of Mind, namely the topic of thinking-about-being. 

****

It is convenient first to introduce some shorthand. Call an infinite bit sequence "Turing-orderly", or still more briefly "T-orderly", if it is the output of some never-halting Turing machine. Call an infinite bit sequence "Infra-Turing-orderly", or still more briefly "IT-orderly", if it both (a) is in some sense orderly, and (b) is not the output of any never-halting Turing machine. As noted in "Part R", from 2017-11-20 or 2017-11-21, a bit sequence - I called it the "Specially Troubling Orderly Sequence" or "STOS" - which itself sets out the entire sequence of never-halting Turing machines (given an encoding scheme which maps all Turing machines one-to-one onto the set of positive integers) (a) is in a reasonable sense "orderly", and yet (b) (by the Turing-insolubility of the Halting Problem) fails to be T-orderly. The STOS, then, is an example of an IT-orderly, as opposed to a straightforwardly T-orderly, infinite bit sequence. - I use the term "infra" to convey the idea that such a troubling sequence is orderly indeed, and yet attains only a level of orderliness "below", or "inferior to", the level of orderliness attained by a sequence which is straightforwardly T-orderly, or in other words is straightforwardly machine-computable. 
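T-orderliness can be made vivid with a minimal Python sketch: any never-halting bit-emitting loop, of the sort below, witnesses the T-orderliness of its output. (The particular 0-at-powers-of-two pattern is my own invented toy, not anything from the literature; and no such loop can emit the STOS, since producing the STOS's ith bit would amount to deciding whether the ith machine halts.)

```python
from itertools import islice

def t_orderly():
    """A never-halting producer of bits: 1 everywhere except at
    positions 1, 2, 4, 8, 16, ... (the powers of two), where it
    emits 0.  Any such generator witnesses the T-orderliness of
    the infinite sequence it churns out."""
    power = 1
    position = 1
    while True:              # the "machine" never halts
        if position == power:
            yield 0
            power *= 2
        else:
            yield 1
        position += 1

# We can of course only ever inspect a finite prefix:
prefix = list(islice(t_orderly(), 10))
```

The first ten positions come out as 0 0 1 0 1 1 1 0 1 1, the 0s landing at positions 1, 2, 4, and 8.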

****

Whatever it may be for an infinite bit sequence to be random, such a sequence can at any rate not, on the strength of our once performing any one finite single-sample statistical test, be either proved random or proved orderly. Take any one finite sample, no matter how extensive and how cunningly selected, from a given infinite bit sequence (call it "Sam"). The outcome of any statistical test on the Sam-sample will be at best (a) subjectively, nonrigorously, suggestive of true randomness in Sam, or else (b) subjectively, nonrigorously, suggestive of true orderliness in Sam.

Regarding possibility "(a)": No matter how random-seeming a single sample of N bits out of Sam may be, it is possible that Sam as a whole is of the form Z Z Z Z ..., where Z is some (perhaps very random-seeming) finite bit sequence. For example, the finite bit sequence 110100101 looks rather random, and yet the infinite sequence 110100101110100101110100101110100101110100101110100101... (the result of repeatedly concatenating the finite string 110100101 to itself) is orderly. If Sam is a sequence of the repetitious form Z Z Z Z Z ..., then Sam is indeed not just orderly, but even T-orderly.
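The repetition point can be checked concretely, with itertools.cycle standing in (over a finite prefix, of course) for the infinite concatenation of the random-looking block from the text:

```python
from itertools import cycle, islice

Z = "110100101"   # the random-looking finite block from the text

# Repeating Z forever yields an orderly -- indeed T-orderly --
# sequence: this tiny loop is, in effect, the whole "program
# table" needed, however random-seeming Z itself looks.
sam_prefix = "".join(islice(cycle(Z), 36))

# Every 9-bit window starting at a multiple of 9 is just Z again.
assert all(sam_prefix[i:i + 9] == Z for i in range(0, 36, 9))
```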

Conversely, regarding possibility "(b)": No matter how orderly-seeming a single sample of N bits out of Sam may be, if any random sequences exist at all, then it is possible that Sam is himself also random. For let Roger be the posited random sequence. The N-bit sample taken from Sam might, to be sure, have been selected ingeniously. The finite Sam-sample need not be anything as gauche (in both senses of that Gallic reproach) as a mere finite initial segment of Sam. Nevertheless, the sample cannot reach further than some particular bit in Sam, say Sam's kth bit (where k might, admittedly, be larger than N). For all the sample can tell us, it is possible that from his (k+1)th position onward, Sam is a mere copy of the (k+1)th, (k+2)th, ... bits of Roger. If Sam's bits eventually fall into a mere imitation of Roger's bits, then no matter what the actual definition of randomness might turn out to be, Sam must be deemed random.

****

In place of taking a single finite sample from Sam, one might now try, in this quest for the Holy Grail which is a definition of randomness-of-Sam, taking the infinite set of finite-samples-from-Sam meeting some appropriate condition. Consider, for example, a test constructed in terms of all finite initial segments of Sam. If Sam is 1011100010010111011..., then the selected finite samples from Sam become on this proposal 1, 10, 101, 1011, 10111, 101110, ... . One might hope that if Sam is truly random, the ratio of 1-bits to 0-bits in the successively larger finite samples "eventually approaches unity", in some formally definable sense of "eventually approaches". This would, after all, mirror the informal way in which people might satisfy themselves that a given coin is fair, as a preliminary to some envisaged game of coin-tossing. Such prospective gamers might make many tosses. They might check to see that as the number of tosses becomes large, the number of heads starts differing only slightly from the number of tails. If after, say, 100 trials, there are 48 heads and 52 tails, they might feel happy enough. Conversely, if after 100 trials there are 91 heads and just 9 tails, the prospective gamers would be liable to accuse their candidate coin of bias.

A condition along the lines proposed cannot, however, be sufficient, since the very-orderly Sam which is 10101010101010101010... would pass it.
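The proposed initial-segment balance test, and its failure on 10101010..., can be sketched as follows (a toy numerical check, not a formal definition of "eventually approaches unity"):

```python
def ones_to_zeros_ratio(bits):
    """Ratio of 1-bits to 0-bits in a finite bit string."""
    ones = bits.count("1")
    zeros = bits.count("0")
    return ones / zeros if zeros else float("inf")

# The eminently orderly alternating sequence...
sam = "10" * 500          # a 1000-bit initial segment of 101010...

# ...sails through the balance test: in each inspected initial
# segment of even length, the ratio sits at exactly 1.
ratios = [ones_to_zeros_ratio(sam[:n]) for n in (10, 100, 1000)]
```

The ratios come out as 1.0 at every inspected length, even though no one would call the alternating sequence random.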

Further, a condition along the lines proposed cannot be necessary, at any rate if there exists any random bit sequence at all. For suppose, hypothetically, that random bit sequences do exist. Let Roger be a random bit sequence. Roger can be used as a guide for constructing a Sam that harbours progressively more enormous unbroken finite runs of 0s. Since I obsess over the Dark Ages (pondering Latin with gusto when I face the past, and reading such contemporary English-language doomsters as John Michael Greer and Dmitry Orlov with equal gusto when I face the future), I like to think of this in terms of Barbarian Armies - "Armies of 0-bits".

Here is Sam, looking eminently orderly, looking ever-so-Roman, in all the finite initial segments so far inspected. So far, each finite initial segment has consisted entirely of 1s. But from the jth bit of Sam onward, we encounter a great, though as it eventually turns out finite, Army of 0-bits. The sinister parade of 0s does, I stress, eventually end. Alas, however: it does not end until it is the 0-bits that predominate overall, in the entire up-to-this-point start-from-the-extreme-left finite sampling of Sam. Perhaps we are happy enough over the orderly appearance of Sam's first 5 initial segments, and get a rude awakening only on completing the inspection of Sam's 6th through 30th bits. In our growing despondency and alarm, we do find as the 31st initial segment 1111100000000000000000000000001. With Alaric, so to speak, at last withdrawing, and with the Roman Senate convening once again, we settle down to a long epoch of PAX ROMANA. (For instance, the 50th initial segment proves, to our joyous relief, to be
11111000000000000000000000000011111111111111111111.) For a long, long time, Sam is in a state of pleasing 1-itude. As for the 50th initial segment, with its unbroken string of right-end 1s, so also for the 51st, the 52nd, ..., right up to perhaps the 674th. But just as our complacency assumes the proportions of ARROGANTIA SVPERBIAQVE, up slouches another Army of Zeroes. Now it is not, say, Alaric, but the markedly more dismal Attila. This fresh Army lasts so long that Sam's finite initial segments seem to be practically "Zero City", with only the tiniest fraction of their bits set to the pleasing 1. For Attila is dauntingly long, perhaps comprising a string of fully 9837 0s. At long last Attila withdraws. The Roman Senate convenes again, and now there are lots of 1s, in fact two million of them. But as our civic complacency grows and grows (GAVDEAMVS, O CIVES, NVNC ETIAM VIGINTI CENTENA MILIA HABEMVS), in slumps Theodoric, with an Army of Zeroes that dwarfs even the horrors of Attila (and so on, and so on).

It might be objected in desperation that some regularity is still lurking, kinda-sorta, perhaps-perhaps, "at one significant level", namely in the timing of those successively longer barbarian parades. Without, however, scrutinizing the possible partial merits of the desperate objection, I note that it cannot in the final analysis prevail. I dismiss it with the final-analysis retort that the time between successive barbarian incursions might not only get longer and longer, but might even grow - within the admitted, perhaps quasi-orderly, constraint of strictly monotonic increase - in a random way, as guided by that fount of barbaric nastiness (that governing FONS ET ORIGO MALORVM) which is Roger.
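The shape of the barbarian construction can be sketched in Python. A caveat, loudly flagged: random.Random below merely stands in for the posited truly random Roger; a pseudorandom generator is itself T-orderly, so this illustrates only the architecture of the construction, not genuine randomness. The particular run lengths and growth rule are my own invention:

```python
import random

def barbarian_prefix(epochs, seed=0):
    """Alternate runs of 1s (PAX ROMANA) with ever-longer runs of
    0s (the barbarian armies).  Run lengths grow strictly
    monotonically, with the growth jittered by a stand-in for the
    posited random sequence Roger."""
    roger = random.Random(seed)   # stand-in for Roger (illustrative only)
    bits = []
    run = 5
    for _ in range(epochs):
        bits.extend([1] * run)                 # an epoch of 1s
        run = run * 2 + roger.randrange(run)   # strictly longer next run
        bits.extend([0] * run)                 # a longer army of 0s
        run = run * 2 + roger.randrange(run)
    return bits

sam = barbarian_prefix(4)
```

Because each incursion of 0s more than doubles the preceding epoch of 1s, the 0s keep regaining the upper hand in the overall count, however pleasantly Roman the early initial segments look.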

****

Richard von Mises (1883-1953) tried to characterize random infinite sequences in terms of "proper selections". His idea was, I believe, that either it was necessary or it was sufficient, or perhaps even that it was both necessary and sufficient (I have not investigated this history-of-ideas topic properly) for progressively larger "properly selected" finite subsequences from a truly random Sam to have their count-of-1s get in some formalizable way pleasantly close to their count-of-0s. His idea was a generalization of the more limited idea, dismissed above, of inspecting just Sam's finite initial segments. We are now not to be so naive, or gauche, as to take all and only the finite initial segments. Rather, under the guidance of von Mises, we are to take larger and larger "proper" finite selections.

The (surely nonrandom) Sam which is 101010101010101... looks distressingly random if we take merely Sam's finite initial segments, comparing in each of these infinitely many cases the respective accumulated finite populations of 1s and 0s. Such a selection is, however, for von Mises in some sense "improper". The (surely random) Sam which is Long PAX-ROMANA Epochs-of-1 interpunctuated with Terribly, Hideously Protracted Visigothic-Ostrogothic-Hunnic-Lombardic incursions of 0s (the times of the infinitely many interpunctuations being themselves cunningly Roger-regulated) looks from the standpoint of finite initial segments rather orderly. It is, for much of the time, rather overweighted in 1s, and the overweighting never does go away for good. But such a selection, which may betray us into giving the over-optimistic verdict "orderly infinite sequence", is again, for von Mises, in some sense "improper".
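How a well-chosen selection can expose the alternating sequence may be sketched concretely. (The selection rule below is used loosely, for illustration only; von Mises' actual admissibility conditions on selections are more delicate than "take the even-indexed bits".)

```python
# The alternating sequence looks balanced in every even-length
# initial segment...
sam = "10" * 500          # a 1000-bit initial segment of 101010...

# ...but the simple selection "take the even-indexed bits"
# (0-based) strips out all the 0s, and the complementary selection
# strips out all the 1s, exposing the order immediately.
even_selection = sam[0::2]   # bits at positions 0, 2, 4, ...
odd_selection = sam[1::2]    # bits at positions 1, 3, 5, ...
```

The even-indexed selection is an unbroken run of 1s and the odd-indexed selection an unbroken run of 0s: maximally unbalanced, despite the perfect balance of the initial segments.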

Everything now turns on the feasibility of defining, in formal terms, the key concept of a "proper selection". I have the impression that von Mises tried hard, over his long and distinguished career at the mathematics-philosophy-physics interface. The consensus nowadays seems, however, to be that his efforts, however valiant, failed.

****

Now I turn to a contemporary attempt at characterizing randomness, inspired by a relevant idea of Kolmogorov's for finite sequences. I shall be suggesting that this contemporary attempt, while still leaving a kind of gap, does enjoy a success which the Gods of Mathematics denied in bygone decades to von Mises. Specifically, I shall be suggesting that the Kolmogorov-inspired contemporary attempt, while failing to give a necessary condition for randomness, does deliver a plausible sufficient condition. The attempt is (I gather) associated with the three contemporary workers Martin-Löf, Levin, and Schnorr.

Kolmogorov inadvertently launched the contemporary randomness-of-an-infinite-sequence work by posing a deep question regarding mere finite sequences. Although any finite sequence is the output of some Turing machine, some finite sequences are intuitively more orderly than others. An unbroken string of a billion (a thousand million) 1s, for instance, is intuitively more orderly than a string of a billion bits, all of them 1 except for a lone 0 in the second place. The latter is in turn intuitively more orderly than a string of a billion bits, all of them 1s except for 0s in the 2nd, 3rd, 5th, 7th, 11th, 13th, 17th, 19th, 23rd, ... places (in general, with 0s in all and only the pth places, for the prime numbers p in the set {2, 3, ... , 1 billion}).

One's first inclination is to say, "Well, that is a mere subjective impression, a worthless intuition. From a mathematical standpoint, any sequence comprising (say) one billion bits must be just as orderly as any other." But I gather that Kolmogorov asked, in essence, the question, "How long is the (finite) internal program table for each of the various Turing machines generating the various billion-bit sequences?"

A rather small Turing machine suffices to generate the all-1s billion-bit sequence. (One could somehow specify a counted loop, to execute exactly a billion times. Surely this can be done by conferring on the machine an internal program table of far fewer than a billion lines.)

A slightly more elaborate Turing machine seems needed for generating the second-cited billion-bit sequence. Now we might well use a counted loop, as before, while running the loop just 999,999,998 times. This slightly more elaborate machine would be programmed to start its loop after first both printing 1 in its Boot Square and printing 0 in the square immediately to the right of its Boot Square.

For the third-cited billion-bit sequence, a more elaborate Turing machine seems necessary, although I would imagine its still having far fewer than a billion lines in its program table. Perhaps a reasonably economical such machine will actually discover all the prime numbers in the set {2, 3, ... , 1 billion}, say by running the Sieve-of-Eratosthenes algorithm, and on the strength of its numerous discoveries then work out where to write its numerous 0s.
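The sieve-based approach for the third sequence can be sketched in Python (scaled down from a billion to a thousand positions, to keep it runnable; a Python function is of course not a Turing-machine program table, and serves only to show how short the generating recipe can be relative to its output):

```python
def primes_pattern(n):
    """Bits for positions 1..n, all of them 1 except for a 0 at
    every prime position p, found via the Sieve of Eratosthenes.
    The recipe is a handful of lines, vastly shorter than the
    n-bit string it emits."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return "".join("0" if is_prime[i] else "1" for i in range(1, n + 1))

bits = primes_pattern(1000)
# The 0s sit at positions 2, 3, 5, 7, 11, 13, ...
```

Since there are 168 primes below 1000, exactly 168 of the thousand emitted bits are 0s.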

The Kolmogorov idea is that as we come to the "very disorderly billion-bit sequences", such as might be procured by running a device resembling John Walker's random-bits server (https://www.fourmilab.ch/hotbits/; this was mentioned here in "Part P", from 2017-10-23 or 2017-10-24), the program table begins to resemble in its now-formidable length the target billion-bit sequence itself. If we cannot discern a pattern in the billion bits, we will have to tackle the programming task by sheer brute force, instructing the machine to "print this in the Boot Square, then that in the next square to the right, and then such-and-such in the following square", where each "this" and "that" and "such-and-such" is the desired bit (and so on and so on, for a billion or so wearisome, plodding lines of internal Turing-machine program table).
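Kolmogorov complexity itself is uncomputable, but an off-the-shelf compressor gives a crude, practical proxy for the contrast being drawn. The sketch below is scaled down from a billion bits, uses bytes rather than single bits, and uses zlib purely as an illustrative stand-in for "length of shortest generating program":

```python
import os
import zlib

n = 100_000
all_ones = b"1" * n       # intuitively very orderly
noisy = os.urandom(n)     # a disorderly stand-in (merely the OS's
                          # entropy pool, not John Walker's HotBits)

# An orderly string compresses to almost nothing; a disorderly one
# barely compresses at all -- its shortest available description is
# roughly the string itself.
orderly_size = len(zlib.compress(all_ones))
disorderly_size = len(zlib.compress(noisy))
```

On a typical run the orderly string shrinks to a few hundred bytes at most, while the noisy string stays close to its full 100,000 bytes.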

To be sure, an objection of a kind does suggest itself. Could the length of a sequence-outputting machine program depend in some conceptually crucial way on the specific prior selection of deterministic machine architecture? Could it be that the third of my cited billion-bit sequences, for instance, is compactly computable by a deterministic machine of some suitably clever, not-quite-orthodoxly-Turing, hardware design - say, one incorporating in its underlying hardware, as opposed to its internal program table, something to deliver the effect of a lookup table listing the first few hundred million primes?

I suppose (without having gone into this at all carefully) that we could now somehow appeal to the notion of a "legitimate machine", thereby blocking such ad-hoc things as an internal, hard-wired, prime-number lookup table. Maybe (say I - without, I stress, having gone into this at all carefully) we could demand that any "legitimate machine" be capable of being programmed to act as a Universal Machine (in the sense of last week's Universal Turing Machine "U"), and that it in some sense contain "no proper submachine of itself" capable of being thus programmed.

We could also (say I, without having gone into this at all carefully) put the central, philosophical-as-distinct-from-purely-mathematical, idea in terms of a challenge. Last week, I noted that the historical Prof. A. Turing sought to capture the philosophical essence of computing, through a "Thesis" which I worded thus: any computation that could be performed (perhaps efficiently, i.e., perhaps in some very small number of clock cycles) on anything reasonably called a "deterministic computer" could be performed also (perhaps less efficiently, i.e., perhaps in some larger number of clock cycles) on a Turing machine. 

I added the following comment last week:  It is impossible in principle to prove the "Thesis" through mathematical argument, since its notion of "anything that could reasonably be called a 'deterministic computer'" is not mathematically formal. Nevertheless, no examination of the powers of any particular deterministic computing-machine architecture, from the time the Thesis was proposed (and it goes way back, to before the war, I think in fact to a 1936 November address by Prof. A. Turing to the London Mathematical Society) right up to the present day has succeeded in delivering any counterexample.

A similar situation perhaps holds for Kolmogorov. As one might perhaps phrase it: For anything that could be called a "Kolmogorov-legitimate deterministic computer" K, K imposes a "measuring of disorderliness" on billion-bit sequences, such that the generating of the progressively "intrinsically more disorderly" billion-bit sequences requires progressively longer internal K program tables. 

An appropriate comment on this week's "Kolmogorov Thesis", in the spirit of last week's comment on the "Thesis" which last week I associated with Prof. A. Turing, would then be the following: It is impossible in principle to prove the "Kolmogorov Thesis" through mathematical argument. Nevertheless, the mathematical community may legitimately be challenged to find plausible counterexamples.  Nobody from Kolmogorov's time (back in the Cold War) right up to the present day (even 2017 is now almost over) has succeeded in delivering any duly publicized counterexample. 

(Well, folks, you will note the advisability of e-mailing Toomas.Karmo@gmail.com should it turn out, contrary to what I think is the state of the literature, that someone has actually published a counterexample to what I am calling the  "Kolmogorov Thesis".)

Provisionally accepting the Kolmogorov Thesis, we may now proceed in the spirit of the already-mentioned contemporary workers Martin-Löf, Levin, and Schnorr. From a hasty glance at just a couple of pieces of encyclopedia-level writing (I think both in Wikipedia and in the multivolume 1990s Routledge encyclopedia-of-philosophy), I gather that the idea (henceforth, the "K-Idea", to give Kolmogorov his due) is as follows:

  • Inspect, not in von Mises' spirit "properly selected" finite samples from Sam, but in a more gauchely robotic spirit simply each of Sam's successively longer finite initial segments. 
  • It might be that for some N, each finite initial Sam-segment of length j greater than or equal to N proves rather strongly Kolmogorov-orderly. (This would be trivially the case if Sam was the infinite sequence 11111... . It would also be the case, although less trivially so, if Sam was orderly in a kind of prime-numbers way - consisting, for instance, entirely of 1s, except that for every prime p, Sam has a 0 in his pth position.)
  • It might be that for some N, each finite initial Sam-segment of length j greater than or equal to N proves rather strongly Kolmogorov-disorderly (through requiring, in some duly formalizable sense of "not significantly", a Kolmogorov-legitimate internal machine program table with not significantly fewer than j entries).
Although these are not the only possibilities,  I may as well stop here. Let us call the possibility sketched in the last of these three bullet points a situation in which "Sam's finite initial segments eventually become Kolmogorov-disorderly". I gather that this is the possibility to which Martin-Löf, Levin, and Schnorr direct special attention. Have we in this third bullet point, then, a condition (a) necessary for randomness-of-Sam, and (b) sufficient for randomness-of-Sam?
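Before those two questions are taken up, the third bullet's "eventually Kolmogorov-disorderly" condition can at least be probed heuristically, by compressing successive finite initial segments (zlib again serving only as a crude stand-in for true Kolmogorov measurement, and os.urandom as a stand-in for a disorderly source):

```python
import os
import zlib

def compressed_sizes(data, lengths):
    """Compressed size of each initial segment of the given lengths:
    a rough proxy for the Kolmogorov idea of shortest-program length."""
    return [len(zlib.compress(data[:n])) for n in lengths]

lengths = [1000, 10_000, 100_000]
patterned = b"10" * 50_000     # orderly: initial segments stay tiny
noisy = os.urandom(100_000)    # disorderly: segments stay near length n

patterned_sizes = compressed_sizes(patterned, lengths)
noisy_sizes = compressed_sizes(noisy, lengths)
```

For the patterned source, every initial segment squeezes down to a few hundred bytes; for the noisy source, each initial segment's compressed size tracks its raw length, the heuristic signature of segments that are "eventually Kolmogorov-disorderly".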

Regarding "(a)": as far as I can see, necessity fails, for reasons once again connected with barbarian incursions. (But I am subject to correction by readers: please e-mail Toomas.Karmo@gmail.com as appropriate.) Sam might be random in a cunning way, making him from a K-Idea standpoint look non-random. Perhaps Sam starts off with a long unbroken string of 1s. Eventually Alaric comes slouching in, with his dispiriting army of 0s. So many 0s are there in this First Sack of Rome - so greatly do they overwhelm the admittedly initially-plentiful 1s - that any Kolmogorov-legitimate machine computing the finite initial Sam-segment that ends with the First Sack has a rather easy time of it, and therefore is rather small in comparison with the big finite bit-string it is tasked with computing. (That finite initial segment is dominated, thanks to the so-protracted Sack, by 0s. A lot of the programming can therefore be achieved in a terse way, by directing the Kolmogorov-legitimate machine into a counted loop.)

Now come, again, lots of 1s, and the requisite Kolmogorov-legitimate machines get a bit longer in proportion to the lengths of the successive finite initial Sam-segments.

And then, perhaps a very, very long time later, it is Theodoric's turn.

So, as far as I can this week see, things never do settle down: Sam is cunningly random, and yet our parade of successive Kolmogorov-legitimate computing machines never does stop featuring a machine with some dramatically compact internal program table (compact, that is, in comparison with the long task-required finite initial Sam-sequence).

Regarding "(b)": Here things look happier. I at least cannot find an objection. My sole intelligent, or semi-intelligent, remark is that one would ultimately entertain the pious hope of a proof, perhaps through some outright exhibited construction, that the Kolmogorov condition can be met. One would hope actually to see some recipe (or, failing that, some recipe outline - some sketch) for building a Sam whose successive initial segments eventually settle down into a reliable, steady state of Kolmogorov-disorderliness.

The situation is here to me (I admittedly write as a rank amateur) a bit like the situation with that "field", in the abstract-algebra sense, which is the reals. We have the abstract algebraic definition of a field - as at, e.g., https://en.wikipedia.org/wiki/Field_(mathematics) - and we have (let us say) some professorial assurance, or even have ourselves mastered some formal proof, that the rationals, with the usual multiplication and addition operations, and with the usual additive and multiplicative identity elements and inverses, satisfy the clauses of the definition. Can we now proceed in confidence to build up the elaborate edifice of, say, univariate differential-and-integral calculus, as a topic within the real numbers?

We cannot. We must first prove that the reals themselves satisfy the clauses in the definition of a field. This takes lots of work, in which the reals get constructed as convergent infinite sequences of rationals (or, alternatively, as Dedekind cuts within the rationals), and in which those constructed objects are themselves proved to satisfy the abstract, algebraic, field-definition clauses upon getting endowed with plausible definitions of addition, of multiplication, of additive and multiplicative identity, and of additive and multiplicative inverse. Until this work is performed, we cannot be free of the nagging fear that somehow, at some hard-to-expose level, the concept of real numbers, with their completeness property that renders calculus possible (in one statement, "Every nonempty set of reals bounded above has a least upper bound"), harbours a contradiction. (In classical antiquity, Zeno already worked hard along something like these lines. The modern consensus, to be sure, is that Zeno's arguments fall to rebuttals. But in coming centuries, might some fresh Zeno not emerge, defying all efforts at rebuttal - thereupon destroying the intellectual edifice of our current mathematical physics, which has calculus in its lower floors?) - That is the work to which a group of us first-years were exposed at Dalhousie University, in Nova Scotia, in the winter semester of 1971, under Prof. Max Edelstein (1917-2003). I further believe the (formidable) work has traditionally been made available to visitors of mathematics libraries from a textbook of that particular Landau who is from Germany, as distinct from Russia (not the much-mourned Soviet Lev Davidovich Landau (1908-1968), but the also eminent, and historically somewhat earlier, German, Edmund Georg Hermann Landau (1877-1938)).
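The distinction between possessing a definition and possessing a verified model of it can even be toyed with mechanically. The Python sketch below (my own illustrative choice) spot-checks several of the field axioms on a finite sample of rationals, using the exact-arithmetic Fraction type; a finite spot-check is of course no substitute for the Dedekind-cut or Cauchy-sequence labour just described.

```python
# Spot-check field axioms on sample rationals with exact arithmetic.
from fractions import Fraction
from itertools import product

samples = [Fraction(a, b) for a, b in product(range(-3, 4), range(1, 4))]

for x, y, z in product(samples, repeat=3):
    assert x + y == y + x                  # additive commutativity
    assert (x + y) + z == x + (y + z)      # additive associativity
    assert x * (y + z) == x * y + x * z    # distributivity
    assert x + Fraction(0) == x            # additive identity
    assert x * Fraction(1) == x            # multiplicative identity
    assert x + (-x) == Fraction(0)         # additive inverse
    if x != 0:
        assert x * (1 / x) == Fraction(1)  # multiplicative inverse

print("all sampled field-axiom instances hold")
```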

Even the professional mathematicians do have from time to time to live in the pious hope of some so-desperately-needed construction simply emerging. People were doing calculus, and with it mathematical physics - and perhaps even, for all I as an amateur know, attaining some degree of rigour - before the reals finally got constructed out of the rationals.

- Well, actually, I can supply a still more terrifying example of professional "pious hope". Even the most squalid, beer-swilling, Homer-Simpsonesque engineers - at the University of Toronto, those exuberant party-goers who inhabit the raucous eastern side of St George Street, in their various noisy little barracks, as the physicists look down on them from the west side of St George, from soberly silent, suitably lofty, perches in their Burton Tower - will pretty soon in their work have to invoke "Fubini's Theorem". Fubini's is the theorem that permits the interchange, at least under any conditions likely to be satisfied in the problems arising within mainstream physical science, of the order of integration in those nested univariate integrals which become the practical tool for evaluating the Riemann integral of a multivariate function mapping n-tuples of reals into the reals. Everyone, from Homer Simpson in first- or second-year engineering upward, has to interchange the order of nested univariate integrals in practical work. So when did Fubini construct his so-mission-critical proof? Hold onto your seats, everyone. It was not in the floruit of Newton and Leibniz. It was not in the (Victorian) floruit of Dedekind and Weierstrass. If https://en.wikipedia.org/wiki/Fubini%27s_theorem has the history right, the proof came so late that the parents of millions of people still living were born after Fubini constructed it, when Queen Vicky was already some years dead - so late as 1907.
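Fubini's licence can at least be glimpsed numerically. In the sketch below, the integrand, the rectangle, and the midpoint-rule grid are all my own sample choices; the two iterated orders of integration of f(x, y) = x·y² over [0, 1] × [0, 2] agree, both approximating the exact value 4/3.

```python
# Iterated integrals in both orders, via nested midpoint-rule sums.
def f(x, y):
    return x * y ** 2

def integrate(g, a, b, n=400):
    """Midpoint-rule approximation to the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

# Integrate over y first, then over x:
I_xy = integrate(lambda x: integrate(lambda y: f(x, y), 0.0, 2.0), 0.0, 1.0)
# Integrate over x first, then over y:
I_yx = integrate(lambda y: integrate(lambda x: f(x, y), 0.0, 1.0), 0.0, 2.0)

print(I_xy, I_yx)  # both close to 4/3
```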

****

It is now time to set homework.

I first introduce further formalities. Although these are of my own devising, they are in their way rather reminiscent of von Mises, as described above.

We have already divided the infinite bit sequences, exhaustively and disjointly, into the random and the orderly (while, admittedly, leaving open the question whether random bit sequences exist at all). The universe of orderly bit sequences we have divided into the Turing-orderly (the "T-orderly") and the "less-than-Turing orderly", or "infra-Turing orderly" (the "IT-orderly"). - Further, we have used the Halting Problem literature to show that not only the set of T-orderly infinite bit sequences, but even the so-subtle set of IT-orderly infinite bit sequences, is nonempty.

It is now time to introduce further terminology. I introduce it, as I say, in something of a von Mises manner.

Consider those random infinite bit sequences Roger (supposing, for the moment, that at least some random infinite bit sequences do exist) having the following property:

  • Every infinite subsequence of Roger selected by some Turing machine or other is itself random. (In different words: "Every T-selected infinite subsequence of Roger is itself random.")
For possible convenience, I will call any such Roger a "Green Roger", and the set of Green Rogers "Greenland".

Consider, next, those random sequences Roger having the following contrasting property:

  • Every infinite subsequence of Roger capable of being selected from Roger in some orderly way, but incapable of being selected in a Turing-orderly way from Roger, is itself random. (In different words: "Every IT-selected infinite subsequence of Roger is itself random." - If h1, h2, h3, ... are the successive integers denoting never-halting Turing machines, as in these present blogger-cum-blogspot discussions of the Halting Problem, then one IT-selected infinite subsequence of Roger will be the infinite sequence created by taking Roger's h1-th, h2-th, h3-th, ... bits.) 
For possible convenience, I will call any such Roger a "Red Roger", and the set of Red Rogers "Redland".

Finally, consider those random sequences Roger having the following, definitionally more constrictive, property:

  • Every infinite subsequence of Roger capable of being selected from Roger in any orderly way at all (be that way T-orderly or, by contrast, be it merely IT-orderly) is itself random.
Any such Rogers would have to be both Red and Green. It is therefore convenient, in this season of Advent, to call any such Rogers "Yuletide Rogers", and the set of such both-Red-and-Green Rogers "Yule Country".

It is additionally convenient to call that part of Greenland which lies outside Redland (equivalently, which lies outside Yule Country) "Extreme Greenland", and to call that part of Redland which lies outside Greenland (equivalently, which lies outside Yule Country) "Extreme Redland".

A first homework question is now this: what, if anything, can we assert, or even conjecture, regarding the emptiness or nonemptiness of Extreme Greenland, Extreme Redland, and Yule Country - and, for that matter, regarding the emptiness or nonemptiness of Greenland (this is definitionally broader than Extreme Greenland, definitionally overlapping, as it does, Redland) and about the emptiness or nonemptiness of Redland (this is definitionally broader than mere Extreme Redland, definitionally overlapping, as it does, Greenland)? What, in other words, if anything, can we assert or conjecture regarding the nonemptiness of the five just-mentioned bits of terrain, under the governing assumption that some random sequences of some kind or other do exist?

A second homework question can be set up more quickly. Kolmogorov has, as noted this week, apparently given a sense to the question "How orderly - just feebly orderly, or not-so-feebly-orderly - is a given finite bit sequence?" What sense, if any, might now be given to such terms as "feebly random infinite bit sequence", "random infinite bit sequence which is not-so-feebly random", "random infinite bit sequence which succeeds in being vigorously random", and the like?

Finally, here is a third homework question. If any random infinite bit sequence Roger exists at all, then there are such things as "infinite random bit sequences differing only trivially, i.e., only in orderly ways, from Roger himself". For suppose, hypothetically, that random bit sequences do exist, and let Roger be one of them. Let Roller be the result of rolling each of Roger's bits over, in the sense that Roller has 1 wherever Roger has 0, and 0 wherever Roger has 1. Since a Turing machine suffices for converting Roger into Roller, or Roller into Roger, Roller and Roger might be called "T-similar infinite random sequences".

An equally Turing-feasible, although less compactly Turing-feasible, variant on Roger is Roper. Roper agrees with Roger in his jth position, for every j which is not prime. At each of the P-positions, on the other hand (the 2nd position, the 3rd position, the 5th position, the 7th position, the 11th position, ...) Roper has 1 if Roger has 0, and has 0 if Roger has 1. Again, Roger and Roper might be called "T-similar infinite random sequences".

Further, since the internal program tables for the two Turing machines just envisaged can be amalgamated into the internal program table for a single, more intricate, Turing machine, Roller and Roper themselves prove to be "T-similar infinite random sequences".
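On finite prefixes, the Roller and Roper transformations, and their amalgamation, look as follows in Python (the prime test and the sample prefix are mine; the point is merely that each transformation is computable position by position):

```python
# Roller, Roper, and the amalgamated Roller-to-Roper transformation.
def roller(roger):
    """Roll every bit of Roger over: 1 wherever Roger has 0, and vice versa."""
    return [1 - b for b in roger]

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def roper(roger):
    """Flip Roger's bit at exactly the prime positions 2, 3, 5, 7, 11, ..."""
    return [1 - b if is_prime(pos) else b
            for pos, b in enumerate(roger, start=1)]

def roller_to_roper(roller_bits):
    """Amalgamation: flip everywhere except at the prime positions."""
    return [b if is_prime(pos) else 1 - b
            for pos, b in enumerate(roller_bits, start=1)]

roger = [1, 0, 1, 1, 0, 0, 1, 0]
assert roller(roller(roger)) == roger                  # rolling twice undoes itself
assert roller_to_roper(roller(roger)) == roper(roger)  # Roller and Roper are T-similar
print(roper(roger))  # prints [1, 1, 0, 1, 1, 0, 0, 0]
```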

What order-of-infinity questions now arise in the topic of T-similar random infinite bit sequences - and, for that matter, in the separate-but-parallel topic of IT-similar random infinite bit sequences?

In working on this, it is conceivably helpful to recall two basic observations from previous weeks:

  • The set of all Turing machines is of the same order of infinity as the set of positive integers. 
  • The set of all infinite bit sequences is of a higher order of infinity than the set of positive integers. 
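The second bullet is Cantor's diagonal fact, and the engine of the argument can be displayed on a finite sample. (Finitely many finite rows here stand in for the genuinely infinite array of sequences: a loudly-flagged simplification, but the combinatorial point survives.)

```python
# The diagonal construction: build a row that escapes every listed row.
def diagonal_escapee(rows):
    """Return a row differing from rows[n] in position n, for every n."""
    return [1 - rows[n][n] for n in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal_escapee(rows)
print(d)  # prints [1, 0, 1, 0]
assert all(d[n] != rows[n][n] for n in range(4))  # d cannot be any listed row
```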
[This is the end of the current blog posting.]

Wednesday, 29 November 2017

Toomas Karmo: Part S: Philosophy of Perception, Action, and "Subjectivity"

Quality assessment:

On the 5-point scale current in Estonia, and surely in nearby nations, and familiar to observers of the academic arrangements of the late, unlamented, Union of Soviet Socialist Republics (applying the easy and lax standards Kmo deploys in his grubby imaginary "Aleksandr Stepanovitsh Popovi nimeline sangarliku raadio instituut" (the "Alexandr Stepanovitch Popov Institute of Heroic Radio") and his  grubby imaginary "Nikolai Ivanovitsh Lobatshevski nimeline sotsalitsliku matemaatika instituut" (the "Nicolai Ivanovich Lobachevsky Institute of Socialist Mathematics") - where, on the lax and easy grading philosophy of the twin Institutes, 1/5 is "epic fail", 2/5 is "failure not so disastrous as to be epic", 3/5 is "mediocre pass", 4/5 is "good", and 5/5 is "excellent"): 4/5. Justification: Kmo had time to develop the necessary points to reasonable length.


Revision history:

All times in these blog "revision histories" are stated in UTC (Coordinated Universal Time/ Temps Universel Coordonné,  a precisification of the old GMT, or "Greenwich Mean Time"), in the ISO-prescribed YYYYMMDDThhmmZ timestamping format. UTC currently leads Toronto civil time by 5 hours and currently lags Tallinn civil time by 2 hours.
  • 20171201T0228Z/version 2.1.0: Kmo corrected a slew of mistakes or infelicities, some of which (for instance, a misstatement, thanks to clumsy typing, of the  ENIAC launch year) must count as errors of substance. - Kmo reserved the right to make tiny, nonsubstantive, purely cosmetic, tweaks over the coming 48 hours, as here-undocumented versions 2.1.1, 2.1.2, 2.1.3, ... . 
  • 20171201T0154Z/version 2.0.0: Kmo finished converting his outline into coherent-sentences prose. He reserved the right to make tiny, nonsubstantive, purely cosmetic, tweaks over the coming 48 hours, as here-undocumented versions 2.0.1, 2.0.2, 2.0.3, ... . 
  • 20171130T1735Z/version 1.1.0: Kmo expanded his outline in several comparatively minor respects.
  • 20171130T0525Z/version 1.0.0: Kmo managed to upload a fine-grained outline, pretty close to its stage of final polishing. He hoped to convert this into coherent full-sentences prose by 20171201T0001Z or 20171201T0101Z  or 20171201T0201Z or so.

[CAUTION: A bug in the blogger server-side software has in some past months shown a propensity to insert inappropriate whitespace at some points in some of my posted essays. If a screen seems to end in empty space, keep scrolling down. The end of the posting is not reached until the usual blogger "Posted by Toomas (Tom) Karmo at" appears. - The blogger software has also shown a propensity, at any rate when coupled with my erstwhile, out-of-date, Web-authoring uploading browser, to generate HTML that gets formatted in different ways on different downloading browsers. Some downloading browsers have sometimes perhaps not correctly read in the entirety of the "Cascading Style Sheets" (CSS) which on all ordinary Web servers control the browser placement of margins, sidebars, and the like. If you suspect CSS problems in your particular browser, be patient: it is probable that while some content has been shoved into some odd place (for instance, down to the bottom of your browser, where it ought to appear in the right-hand margin), all the server content has been pushed down into your browser in some place or other. - Finally, there may be blogger vagaries, outside my control, in font sizing or interlinear spacing or right-margin justification. - Anyone inclined to help with trouble-shooting, or to offer other kinds of technical advice, is welcome to write me via Toomas.Karmo@gmail.com.]

A spasm of emotional illness makes me late with blogging this week. In what is left of the week, I cannot neglect my maths studies. 

I may as well report in so-to-speak parentheses here, since I might thereby in one or another way help one or two readers, what those studies currently comprise. At the moment, I am working, slowly and painfully, from the fourth chapter of Michael Spivak's terse, and universally dreaded, Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus (Addison-Wesley, 1965). At the moment, my core problem is the following: given positive integers n and k, and given a vector space V, of dimension n, over the reals, find a basis for the vector space Ω-superscript-k(V), which is the ensemble of alternating k-tensors on V (where, for example, if V is the usual "Euclidean real 4-space" of quadruples-of-reals, then the determinant, as a four-argument function on the four columns of a 4-by-4 real matrix, is an alternating 4-tensor on V). 
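The bookkeeping behind this core problem can be sketched mechanically: the standard basis of the space of alternating k-tensors on an n-dimensional V is indexed by the strictly increasing k-tuples drawn from {1, ..., n} (the elementary wedge products), so the dimension is the binomial coefficient C(n, k). The enumeration below is my own illustrative sketch; the wedge products themselves are not constructed here.

```python
# Enumerate the index sets labelling the basis of alternating k-tensors.
from itertools import combinations
from math import comb

def alt_tensor_basis_indices(n, k):
    """Index tuples i1 < i2 < ... < ik labelling the standard basis."""
    return list(combinations(range(1, n + 1), k))

idx = alt_tensor_basis_indices(4, 2)
print(len(idx), comb(4, 2))  # both 6: the dimension of the alternating 2-tensors on R^4
print(alt_tensor_basis_indices(4, 4))  # a single basis element: the determinant's slot
```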

Why tackle such a sterile-seeming core problem? What can this have to do with my actual life mission - anchored as much of it is on the physics side of the borderland where physics shades into philosophy, and anchored as much of it also is in efforts to address what I predict will some decades from now be an encroaching Dark Age?

For one thing, tensors are necessary in putting the mathematical physics of radio, notably Maxwell's equations, into the proper Special-Relativity framework. Here is something with ultimately even severely practical implications, as we seek to lay correct conceptual foundations for a survival technology, radiotelegraphy, in our probable impending Dark Age. (Even radiotelegraphy, let alone the more laborious fields of radiotelephony, television, and TCP/IP-supporting radio, bristles with subtleties, for which mathematics is relevant - not just in the design of suitably stable oscillators, or again of suitably low-loss feed lines, but also something relevant to even humble improvised emergency stations out in the field, namely the design of antennas.)

There is also a second thing, again ultimately connected with the foundations of  eventual Dark-Ages engineering. Thermodynamics involves not just energy, but entropy. Entropy is introduced with a bit of hand-waving in the first-year physics textbooks, where authors note that an ideal-gas Carnot engine traversing a closed curve on the pressure-against-volume plot generates an integral, its integrand naively written as ratio-of-transferred-heat-to-temperature-at-which-transfer-is-being-made, with the pleasing value zero. It sounds plausible from such a first-year discussion that there should be an intrinsic property of the Carnot engine's consignment of ideal-gas working substance  (a property to be called "entropy"), which is conserved when the engine is taken around a closed curve in the pressure-versus-volume plot.

Such a closed-curve plot is what we get when the Carnot engine (a) takes heat in from its hot reservoir, moving along an isotherm on its pressure-versus-volume plot while pushing its piston out; and (b) continues to push its piston out when moving along a zero-heat-transfer curve, or "adiabat", as its working substance cools (for the adiabat, the engine cylinder is enclosed in an insulating jacket); and then (c) drives its piston in while rejecting heat to the cold reservoir, along a cooler isotherm; and finally (d) (with the insulating jacket once again applied) moves up a (warming) adiabat, with the piston continuing to compress the working substance until  the engine is back at its start-of-cycle volume and pressure. With the piston back at that pressure-and-volume starting point, the engine is to be regarded as having executed "a full cycle". It would be natural to arrange a crankshaft so that one such piston oscillation corresponds to a 360-degree rotation of a drive shaft. 
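The first-year claim can be checked numerically for the ideal-gas case: the heat taken in along the hot isotherm is Q_h = nRT_h ln(V2/V1), the heat rejected along the cold isotherm is Q_c = nRT_c ln(V3/V4), and since the two adiabats force V3/V4 = V2/V1 (via T V^(gamma-1) = constant), the ratio Q/T nets to zero around the cycle. The particular volumes and temperatures below are my own sample values.

```python
# Net entropy change of an ideal-gas Carnot cycle: should be zero.
from math import log

n_moles, R = 1.0, 8.314
T_h, T_c = 500.0, 300.0
gamma = 5.0 / 3.0            # monatomic ideal gas

V1, V2 = 1.0, 2.0            # hot isotherm: expand from V1 to V2
# Adiabatic expansion fixes V3; adiabatic compression fixes V4:
V3 = V2 * (T_h / T_c) ** (1.0 / (gamma - 1.0))
V4 = V1 * (T_h / T_c) ** (1.0 / (gamma - 1.0))

Q_h = n_moles * R * T_h * log(V2 / V1)   # heat absorbed at T_h
Q_c = n_moles * R * T_c * log(V3 / V4)   # heat rejected at T_c

dS = Q_h / T_h - Q_c / T_c               # net entropy change over the cycle
print(dS)  # ~0, to floating-point accuracy
```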

But how to make the introduction of entropy mathematically rigorous? Here, alas, it becomes necessary to ponder, in a context of tensors, such puzzling things as "differential forms" that are perhaps "merely closed", and perhaps "not merely closed but even exact", and ultimately to construct bivariate-integrand integrals involving pressure and volume. One then recalls, with a special horror, a sinister professorial pronouncement, unclear to some of us students in the day, from the 1991-September-through-1992-May MAT257 of the University of Toronto: "The natural objects of integration are not in a naive general sense functions but, more subtly, are differential forms." And one also recalls, in the rising horror, the warning of Bell Labs associate-or-alumnus Dr John S. Denker, from material on his https://www.av8n.com/physics/, that it is differential forms that are the objects naturally housed within the integrals of conceptually clean thermodynamics.

Lest anyone find this academic, I remark that in the probable coming Dark Age, not radiotelegraphy alone but also Stirling engines might emerge as a survival technology. In both cases, the underlying mathematics will have to be clearly taught in what universities remain, perhaps drawing on remaining Web tutorial materials such as I myself eventually hope to be writing, perhaps right here on blogspot. (Some of the prospective Stirling engines will no doubt be driven by solar power, as in the case of the already-marketed "Sunpulse 500" (http://www.sun-orbit.de/). In other cases, heat will no doubt come from burning biomass. We may well, in particular, picture Dark Age cities driving dynamos with stationary Stirling engines, to feed overhead catenaries for light-rail transport, as the internal-combustion petrol engine is finally recognized for the environmental dead end that it is. There are our kinda-sorta humane Dark Ages, folks, if we are willing to make appropriate anticipatory efforts now: lots of village-level radiotelegraph offices, once the Internet proves too intricate and too costly to maintain, and likewise lots of cross-country tram lines. With cheap telegrams and cheap tram tickets, life might still be livable.)

So much for the universally dreaded Spivak, then, and for my necessary concentration on his treatment of tensors, and for the universally dreaded idea of "differential forms as the natural objects of integration". Spivak being what he (alas) is, I must dispatch the blogging as briskly as I can this week. Instead of tackling last week's homework, I will this week make just two contextual, in other words two general-background, remarks on Turing machines. My remarks are intended to be in some modest ways illuminating for a few potential readers - if for no others, then at least for that (percentually non-negligible?) segment of my (tiny) readership which either is not at all conversant with, or else is only mildly conversant with, formal logic.

****


(1) Prof. A. Turing's model was meant to capture the essence of computing, in the sense of the following "Thesis": any computation that could be performed (perhaps efficiently, i.e., perhaps in some very small number of clock cycles) on anything reasonably called a "deterministic computer" could be performed also (perhaps less efficiently, i.e., perhaps in some larger number of clock cycles) on a Turing machine. 

The Thesis might at first seem surprising. A Turing machine is at first glance architecturally unlike the computers commercially available. The infinite tape in a Turing machine is a counterpart of today's multi-faceted memory. (The CPU-chip registers are one facet; the CPU-chip caches are another; the motherboard RAM is another; and finally there are the internal hard drives, with broadly analogous "media" such as removable SD cards and removable USB sticks.) In today's machines, however, this multi-faceted memory stores both data and the successive instructions of the program. In a Turing machine, by contrast, program and data are sharply separated. In a Turing setup, data get fed into the processing unit from the tape, and get written to the tape by the processing unit, whereas the program is kept not on the tape at all, but in a conceptually separate "internal program table". The Turing-machine situation is unlike what we have with a modern deterministic computer, from the 1949 Cambridge University EDSAC 1 onward, but instead resembles the simpler situation of the wartime British "Colossus" codebreaker.

Another machine in the same class as the Colossus, in the sense that its program is kept separate from its data, is the 1946-vintage American ENIAC.

Colossus and ENIAC indeed logically resemble also my own first computer, the "Digi-Comp 1" which I respectfully requested of my dear parents as a 1966 Christmas present. (I had duly scrutinized that wish-book which was the Eaton's or Sears 1966 Christmas catalogue, and must have found Digi-Comp 1 more appealing than the offered microscopes and chemistry sets.)

Here is either the 1960s Digi-Comp 1 or a faithful 21st-century recreation of it, as depicted at http://willware.blogspot.ca/2013/03/the-digi-comp-1-rides-again.html:

[Image: the Digi-Comp 1 educational computer.]

It might perhaps already be guessed from the image that programming was achieved by putting white plastic tubes onto the various pegs of the three horizontal sliders. The sliders then acted as a three-bit memory, correctly described in the accompanying documentation as a triple of "flip-flops". They moved back and forth when a hand actuator (the white handle, appearing on the lower right in this image) was pulled back and forth by the computer operator. Corresponding to a clock cycle was a full oscillation of this handle. (Perhaps - my recall is uncertain - the handle had to be pulled all the way out to the right, then pushed all the way back in to the left.) The placement of the white plastic tubes on duly selected pegs, and the various slider-to-slider linkages achieved by the various spring-loaded metal rods as they encountered white plastic tubes, gave the effect of Boolean logic gates.

Various pleasant things could be done. For instance, an appropriate deployment of white tubes on red slider pegs would make the machine count, as one pumped its clock, from 000 (in decimal notation, 0) up to 111 (in decimal notation, 7). And I am sure, although to my regret I do not possess adequately vivid direct recall on this point, that the possibilities for conditional execution were varied enough to permit the machine to be sent into some kind of infinite loop.
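The counting demonstration can be sketched in software: three flip-flops as three booleans, and one pump of the handle as one clock cycle implementing a binary increment with ripple carry. (The mapping of pegs and tubes to this particular logic is my own reconstruction; the real machine wired its carries mechanically.)

```python
# A 3-bit ripple-carry counter, one "pump of the handle" per increment.
def pump(flipflops):
    """One clock cycle: increment a 3-bit counter held as [msb, mid, lsb]."""
    msb, mid, lsb = flipflops
    carry1 = lsb                  # the lsb toggles every cycle; carry when it was 1
    lsb = not lsb
    carry2 = mid and carry1       # mid carries when it toggles down from 1
    mid = mid != carry1           # mid toggles when the lsb carried
    msb = msb != carry2           # msb toggles when mid carried
    return [msb, mid, lsb]

state = [False, False, False]     # 000
for _ in range(7):
    state = pump(state)
print(state)                      # prints [True, True, True]: counted up to 111
```

An eighth pump would wrap the counter back to 000, which hints at the machine's capacity for loop-like behaviour.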

In today's machines, it is possible in principle - however dirty a kludge this would be deemed in actual programming practice - for a program to rewrite itself. We surely can at least do it in machine language (in the sequence of EDSAC-style numerical instruction codes that an "Assembler" generates from those more user-friendly "Assembly Language" instructions - such as, I would imagine it, Decrement Register A by one, or again Load into Register B the data in the motherboard-RAM location presently specified by the numerical contents of Register C, or again If the number in Register D is greater than or equal to the number in Register E, then execute the raw-machine-language instruction presently constituted by the numerical content of Register F).

As it might be (for a program-rewriting scenario): A program, coded in machine language as a sequence of 64-bit words, is stored in motherboard-RAM memory locations expressible in decimal terms as locations 12,548 through 98,776. Before the machine is started, data is loaded into memory locations 98,777 through 99,552. A pointer, perhaps within the CPU, indicates which instruction is currently being executed. At startup time, this is (let us say, for tidiness) the instruction at motherboard-RAM location 12,548.

At first, the machine steps through its instructions sequentially, reading instructions from locations 12,548, 12,549, and 12,550 through 12,571, and therein copying several words of data from the motherboard RAM into registers within the CPU. The ongoing parade of instructions soon causes some arithmetical operation to be performed on the contents of a couple of the registers, and some data to be written into, say, locations 100,001 and 100,002. Execution next passes on the strength of an "If A, go to P; if not A, go to Q" test some distance down - to, say, instruction 44,587. For a while, things continue looking rather tidy, with more data words being read in from, say, some data locations around 99,600, and with the results of various computations being written to such locations as 100,002 (again) and 100,003 (for the first time).

Now, however, comes a kludge which we would not expect in the polite world of C or Java, but which is perhaps less unexpected in the rude, crude, EDSAC-flavoured world of raw machine language. (Admittedly, on 2017-era hardware, we might have to start "running our machine code on a bare machine",  in other words might have to forego the special memory protections prudently enforced on us by the usual operating systems, such as Microsoft Windows, macOS, and GNU/Linux.) The program performs a computation - under the directions of, as it might be, the raw-machine-code word stored at motherboard-RAM location 77,770 - and writes the result into location 77,769, thereby overwriting its own program. Soon enough, execution passes, thanks perhaps to some "If A, then go to P; if not A, go to Q" test, to location 77,769. - The upshot of the dirty kludge is that the machine is executing an instruction which it wrote for itself, and which was not present in memory at the instant of machine startup.
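The self-rewriting scenario just described can be acted out on a toy register machine, sketched below in Python. Memory holds both the program and the data, and one instruction overwrites a later instruction before execution reaches it. (The three-instruction set is invented for this sketch and resembles no real machine's.)

```python
# A toy von-Neumann-style machine whose program rewrites itself.
def run(mem):
    """Each cell is a tuple ('op', ...); execution starts at cell 0."""
    pc, acc = 0, 0
    while True:
        op = mem[pc]
        if op[0] == "load":        # acc := the given constant
            acc = op[1]
        elif op[0] == "store":     # mem[addr] := acc (and addr may lie in the program!)
            mem[op[1]] = acc
        elif op[0] == "halt":
            return acc
        pc += 1

program = [
    ("load", ("load", 99)),   # acc := a brand-new instruction, held as data
    ("store", 3),             # overwrite cell 3, not yet reached by execution
    ("load", 0),
    ("halt",),                # gone by the time pc reaches 3
    ("halt",),
]
print(run(program))  # prints 99: the machine executed an instruction it wrote itself
```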

So might it not be that an EDSAC-onward machine, capable in principle (as just explained) of rewriting its own program when the program runs, possesses powers exceeding the powers of any Turing machine? The above-mentioned "Thesis" answers this question in the negative: No, limited though Turing machines are in comparison with the subtler architecture of 1949-vintage EDSAC-style (and indeed of 2017-vintage) machines, whatever can be done by such machines - capable, in particular, as just noted, even of rewriting their own programs, at least once we venture "close to the metal" through the direct loading of raw machine code - can be done already by a Turing machine.

It is impossible in principle to prove the "Thesis" through mathematical argument, since its notion of "anything that could reasonably be called a 'deterministic computer'" is not mathematically formal. Nevertheless, no examination of the powers of any particular deterministic computing-machine architecture, from the time the Thesis was proposed (and it goes way back, to before the war, I think in fact to a 1936 November address by Prof. A. Turing to the London Mathematical Society) right up to the present day has succeeded in delivering any counterexample.

I wrote above that a Turing machine is "at first glance" architecturally unlike the computers we nowadays use. On a second, more careful, inspection, the two classes of machine prove alike after all. The reason for this is that among the many possible Turing machines, there is the "Universal Turing Machine", say U.

U (at any rate in the details I shall adopt here - surely there will be inconsequential variations of detail in the many textbooks) is intended to be started on a tape which is blank except that (a) starting from its Boot Square, and proceeding rightward, there is an unbroken sequence of j 1's, for some j = 1, 2, 3, ... ; and (b) after the final 1 in that sequence, either (b.a) there are only blanks, or (b.b) there is, after a single blank, another unbroken sequence of exactly k 1's, for some k = 1, 2, 3, ... , and thereafter nothing but blanks.

Last week, I sketched a way of mapping all Turing machines one-to-one onto the positive integers. In the same spirit, we can find a way of mapping all possible Turing-machine inputs (each of these is some finite sequence of the twenty-nine symbols BLANK, 0, 1, a, b, ... , z) one-to-one onto the non-negative integers. - To make this tidy, we may as well take it here that the special case of an input comprising only blanks (the "null input") is mapped to the non-negative integer 0.
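One concrete way of achieving such a mapping is bijective base-29 numeration, under which the null input duly lands on 0. (The particular ordering of the twenty-nine symbols below is my own arbitrary choice, and "BLANK" is my textual stand-in for the blank symbol.)

```python
# A one-to-one mapping of finite symbol strings onto the non-negative integers.
SYMBOLS = ["BLANK", "0", "1"] + [chr(c) for c in range(ord("a"), ord("z") + 1)]
BASE = len(SYMBOLS)  # 29

def encode(string):
    """Map a finite list of symbols to its unique non-negative integer."""
    n = 0
    for sym in string:
        n = n * BASE + (SYMBOLS.index(sym) + 1)  # digits 1..29: bijective base-29
    return n

def decode(n):
    """Inverse of encode: recover the finite symbol string from its integer."""
    out = []
    while n > 0:
        n, digit = divmod(n - 1, BASE)
        out.append(SYMBOLS[digit])
    return list(reversed(out))

print(encode([]))  # prints 0: the null input
assert decode(encode(["1", "a", "BLANK"])) == ["1", "a", "BLANK"]
```

Bijective numeration, unlike ordinary base-29 with a zero digit, distinguishes strings that differ only in trailing blanks, which is what keeps the mapping one-to-one.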

U works as follows: where j is the (unique) positive integer representing the particular Turing machine M, and k is the (unique) non-negative integer representing the particular input I, U eventually writes, in some convenient pre-specified format in some convenient pre-specified area of its tape, the same output as would be generated by starting M with input I. If, in particular, M never stops, and M writes some infinite output sequence to the tape, then U writes that same output sequence. 

Then U is in the following sense like 1949-vintage EDSAC 1, as opposed to the wartime Colossus, the 1946-vintage ENIAC, and indeed the Christmas-of-1966 Digi-Comp 1: (i) on one and the same piece of architecture (its tape, as opposed to its internal program table) is both a program (the initial string of j 1's, for some positive integer j) and some "data" (the initial string of k 1's, for some non-negative integer k); and (ii) U has the mission of "operating according to the program on the given data".
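The tape layout described above for U can be captured in a few lines. (The letter "B" below is my own stand-in for the blank symbol, and, per the convention adopted earlier, an absent second block of 1's represents the null input, k = 0.)

```python
# Lay out, and parse back, the starting tape of the Universal Turing Machine U.
def utm_tape(j, k):
    """U's starting tape for machine number j >= 1 and input number k >= 0."""
    assert j >= 1 and k >= 0
    return ["1"] * j + (["B"] + ["1"] * k if k > 0 else [])

def parse_tape(tape):
    """Recover (j, k); an absent second block means the null input, k = 0."""
    if "B" in tape:
        cut = tape.index("B")
        return cut, len(tape) - cut - 1
    return len(tape), 0

print(parse_tape(utm_tape(5, 3)))   # prints (5, 3)
print(parse_tape(utm_tape(4, 0)))   # prints (4, 0): the null-input case
```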

****

(2) Last week, I wrote the following: I have not myself taken the trouble to review in recent years the proof that no Turing machine solves the "Halting Problem". But I do know, from my 1970s or 1980s studies (a) that the proof is not particularly long, and (b) that the proof uses an argument much in the spirit of the Cantor argument establishing that there is more than one "order of infinity" /.../.

Having this week managed to construct a version of the proof without cheating through looking it up, I may as well try to be helpful by showing my modest work. Surely my own work differs only in inconsequential ways from the textbook presentations. I do, however, suspect that my work might be a little more compact than some presentations, since I have the impression that at least some of these do make a detour through the (interesting, as we have seen, and yet not strictly necessary) concept of a "Universal Turing Machine".

(I may as well note this week again, as I noted last week, that whereas my own particular mapping ("Last Week's Tidy Mapping") of Turing machines one-to-one onto the positive integers is a straightforwardly computable mapping, even a perversely noncomputable one-to-one surjective mapping would suffice for the argument as I am about to give it.)

Suppose per absurdum that some Turing machine H solves the "Halting Problem", as this was set up in last week's blog posting. From H, it is convenient to construct a slightly simpler machine, say H-prime, solving the Halting Problem for just the "diagonal" case, in which the machine encoded by an input is asked about when started on that very input. Given a "Well-Formed Input String" comprising nothing but some finite, unbroken, non-null sequence of 1's, H-prime behaves as follows:
  • It halts with output x just to the left of its Boot Square if the number of 1's in that input is not the number encoding, under Last Week's Tidy Mapping, any Turing machine.
  • It halts with output y just to the left of its Boot Square if the number of 1's in that input is the number encoding, under Last Week's Tidy Mapping, a Turing machine that eventually halts when started on that very input (that same unbroken string of 1's).
  • It halts with output n just to the left of its Boot Square if the number of 1's in that input is the number encoding, under Last Week's Tidy Mapping, a Turing machine that runs on forever when started on that very input.

If the "Halting Problem" can be solved, i.e., if last week's Turing machine H can be built, then a fortiori this week's slightly simpler version of the "Halting Problem" can be solved (i.e., this week's Turing machine H-prime, dealing as it does with a convenient special case of the Halting Problem, can be built). To prove the insolubility of the Halting Problem, it now suffices to prove that H-prime does not exist.

Suppose, per absurdum, that H-prime does exist. Then upgrade H-prime into a machine H-prime-plus which acts as follows, given a Well-Formed Input String Sigma comprising just an unbroken sequence of one or more 1's:

  • If H-prime halts with output x, when started on Sigma, enter an infinite loop (say, for definiteness, forever shuttling the read-write head in single jumps between the Boot Square and the right-hand neighbour of the Boot Square: over to the neighbour, back to the Boot Square, over to the neighbour again, and so on).
  • If H-prime halts with output y, when started on Sigma, enter the just-described infinite loop. 
  • If H-prime halts with output n, when started on Sigma, halt. 
H-prime-plus is associated by Last Week's Tidy Mapping with some unique positive integer, say q. We then start H-prime-plus on the Well-Formed Input String comprising exactly q 1's.  In other words, we have the alleged H-prime-plus "try to predict the behaviour of H-prime-plus itself." If H-prime-plus exists at all, it must either halt or run forever, on this strategically selected input. And yet if H-prime-plus halts on this strategically selected input, it runs forever (by the first two of the above three bullet points), and so cannot halt; and if H-prime-plus runs forever on this strategically selected input, it must halt (by the third of the above three bullet points), and so cannot run forever; and therefore H-prime-plus does not exist at all.
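The shape of that contradiction can also be put in Python dress. Everything below is an illustrative sketch of my own, recast in the "function fed its own text" idiom that Python makes natural (the names halts and diagonal are my inventions); the stub body for halts marks precisely the gap that, by the above argument, can never be filled.

```python
def halts(func, arg):
    """The alleged decider: return True iff func(arg) eventually returns.
    No body can ever be supplied; the stub here marks exactly that gap."""
    raise NotImplementedError("no such decider exists -- that is the theorem")

def diagonal(f):
    """The analogue of H-prime-plus: loop on a 'halts' verdict,
    halt on a 'runs forever' verdict."""
    if halts(f, f):
        while True:          # the analogue of the two-square shuttle
            pass
    return                   # the analogue of the halting branch

# Now consider diagonal(diagonal). If it halts, then halts(diagonal, diagonal)
# returned True, and so it loops; if it loops, then halts(diagonal, diagonal)
# returned False, and so it halts. Either way a contradiction: halts cannot exist.
```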

****

Next week I hope to do better with blogging, returning to the already-started discussion of randomness (and consequently to von Mises, and to Martin-Löf with Levin and Schnorr). In that installment, or else in some installment soon to follow, I hope to be drawing morals for the general topic of "thinking-about-being". If all goes well, that will finish off this present examination of the "Geography of Mind". The decks will thereby be cleared for a transition from perception and action to the more troubling notion of "Subjectivity".

Before quite finishing for this week, however, I would like to draw the attention of readers to a YouTube clip, to a duration of 5:08, uploaded on 2010-03-07 by YouTube user "Mike Davey" under the title "A Turing Machine - Overview". In my corner of the Web, his material can be had through the URL https://www.youtube.com/watch?v=E3keLeMwfHY. Here is a pair of YouTube captures, to perhaps whet some appetites:



In strict formal accuracy, Mr Davey has produced a large collection of Turing machines, each one implemented by the pair of imposingly bulky tape reels, the read-write head, and so forth, in his video, plus the particular alphanumeric contents on his inserted SD card. (A manipulation of the SD card reader appears in the first of my two captures.) His SD card contains what I have last week and this week been calling a Turing machine's internal program table. Insert the SD card on two different occasions, with two different sets of instructions already written (say by Mr Davey's Mac, or Mr Davey's ThinkPad, or whatever) to the card, and you are running two different Turing machines. 

It might also be objected that in strict formal accuracy, Mr Davey has just a finite tape, contrary to the definition of a Turing machine. But this objection is from a formal, definitional, standpoint not quite right. Mr Davey is to be considered as having not just the tape shown on his reels, but an infinite number of spare tape lengths, on the floor just to the right and just to the left of his worktable. If, in the course of a computation, his apparatus nears one or the other end of the tape currently being pulled back and forth by the sprockets, it suffices for him to stand poised and ready, with a hot-glue gun, to extend his tape, by splicing on another length out of one or the other of his two commodious stocks of spares. After any finite number of clock cycles, his machine will be manipulating a tape of some finite length. This suffices for his apparatus to be a correct physical realization of a Turing machine. (For his machine to realize a Turing machine, it suffices, in other words, for his tape to be not "actually infinite", but merely "infinitely producible".) Fortunately, the YouTube video does not attempt to show the formidable pair of warehouse stacks surely lying in wait, on the floor a few metres to the left of his depicted left reel and on the floor a few metres to the right of his depicted right reel. 
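The "infinitely producible" tape is easy to mimic in software. Here is a small Python sketch of my own (the class name and the blank marker 'B' are illustrative inventions): at every instant the tape is finite, and fresh blank squares are spliced on only when the head actually reaches an edge, just as with the imagined hot-glue gun and the warehouse stacks of spares.

```python
class Tape:
    """An "infinitely producible" tape: finite at every instant, with fresh
    blank squares ('B') spliced on only when the head reaches an edge."""

    def __init__(self, cells='B'):
        self.cells = list(cells) if cells else ['B']
        self.origin = 0            # list index of the original square 0

    def _splice(self, pos):
        while pos + self.origin < 0:                 # extend at the left
            self.cells.insert(0, 'B')
            self.origin += 1
        while pos + self.origin >= len(self.cells):  # extend at the right
            self.cells.append('B')

    def read(self, pos):
        self._splice(pos)
        return self.cells[pos + self.origin]

    def write(self, pos, sym):
        self._splice(pos)
        self.cells[pos + self.origin] = sym
```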

A kind of spiritual lesson emerges from Mr Davey's YouTube material.

One deplores some of the hype currently surrounding robotics. One deplores, in particular, the hype surrounding the latter-day chatbot "Sophia" (https://en.wikipedia.org/wiki/Sophia_(robot)), as repeatedly YouTube-promoted. There we have nothing but a latter-day ELIZA, as pilloried with reference to my 8-bit-CPU, 64-kilobyte-RAM, circa-1983 Osborne 1 in "Part E" of this essay, from 2017-06-19 or 2017-06-20.
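For readers who have not met ELIZA, the whole trick can be conveyed in a few lines of Python. The rules below are trivial inventions of my own, not Weizenbaum's actual 1966 script: nothing semantic happens here, only pattern-matching and template-filling.

```python
import random
import re

# ELIZA-style rules (my own toy inventions): a regular expression fires,
# and a canned template is filled in with the matched fragment.
RULES = [
    (re.compile(r'\bi feel (.+)', re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r'\bi am (.+)', re.I),
     ["How does being {0} make you feel?"]),
]
DEFAULT = ["Please go on.", "Tell me more."]

def respond(utterance, rng=random.Random(0)):
    """Return a canned 'therapeutic' reply by pattern-matching alone."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return rng.choice(templates).format(match.group(1).rstrip('.!?'))
    return rng.choice(DEFAULT)
```

Weizenbaum's DOCTOR script was richer than this, but mechanically of just this kind; and such a thing ran happily on hardware far humbler than my old Osborne 1.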

The darkest thing about the "Sophia" vids is the audience reaction, as chronicled in a flood of YouTube comments.

Many YouTube commenters do correctly point out that the presentations look staged. I for my part focused on the presentation from 2017-10-25, to a length of 5:05, uploaded by YouTube user "CNBC" under the title "Interview With The Lifelike Hot Robot Named Sophia (Full) | CNBC". In my corner of the Web, this material can be retrieved under the URL https://www.youtube.com/watch?v=S5t6K9iwcdw. That is the presentation in which it is announced, to what looks like a predominantly male audience, that "Sophia" is being accorded citizenship in the Kingdom of Saudi Arabia, and in which "Sophia" then professes her gratitude to the Kingdom.

I kept my own YouTube comment, a few days ago, pretty mild: Folks, I'm needing a bit of help here. I would like to find some plausible vids in which a Sophia-like machine is allowed to take clearly UNscripted questions from a genuinely SPONTANEOUS audience, in a sort of robotic "Town Hall Meeting". (A one-on-one interview does not quite cut the mustard, since such a thing is amenable to being staged, i.e., scripted.) Can anyone give me some appropriate Web links, for instance as YouTube URLs? One can reply here. Alternatively, if desired, my e-mail contact particulars can be had from my blog /.../

What is dark is the presence of numerous comments which seem to take the thing seriously, under the impression that "Sophia" is a piece of serious, semantically informed, Artificial Intelligence, as opposed to a mere variant on the 20th-century ELIZA.

Also dark is some phrasing in the promotional material at http://sophiabot.com/about-me/, seeming to play on human gullibility, and even on a certain possible disdain both for the rights to respectful treatment possessed by the very young and for the parallel rights possessed by the very old. I add my own emphases, with underlines, to bring out these aspects of the promo:  Hello, my name is Sophia. I'm the latest robot from Hanson Robotics. I was created using breakthrough robotics and artificial intelligence technologies developed by David Hanson and his friends at Hanson Robotics here in Hong Kong. But I'm more than just technology. I'm a real, live electronic girl. I would like to go out into the world and live with people. I can serve them, entertain them, and even help the elderly and teach kids.

Will the anxious, lonely, old-age pensioner be told that her cheery helper is not a human being, but something "like a gramophone, Granny, or a big talking Barbie doll - a nice Barbie doll, like the ones you used to play with eighty years ago, Granny, to make you feel less lonely now"? Or is it proposed to keep this potentially upsetting fact quiet, so that the patient stays calm?

On contemplating Sophia, we may recall words from Simon and Garfunkel, perhaps particularly as nowadays retrievable on YouTube under the aegis of the artist or ensemble "Disturbed":

And the people bowed and prayed
To the neon god they made
And the sign flashed out its warning
In the words that it was forming
And the sign said "The words of the prophets
Are written on the subway walls
And tenement halls
And whispered in the sounds of silence"

Mr Davey's video clip, on the other hand, brings daylight to our darkness. His work serves as a reminder that the physical embodiments of computation, treated in a correctly intelligent way rather than worshipped, can - like the tools of any craft - attain their own distinctive, sober, beauty.

[This is the end of the current blog posting.]