Wednesday 29 November 2017

Toomas Karmo: Part S: Philosophy of Perception, Action, and "Subjectivity"

Quality assessment:

On the 5-point scale current in Estonia, and surely in nearby nations, and familiar to observers of the academic arrangements of the late, unlamented, Union of Soviet Socialist Republics (applying the easy and lax standards Kmo deploys in his grubby imaginary "Aleksandr Stepanovitsh Popovi nimeline sangarliku raadio instituut" (the "Alexander Stepanovich Popov Institute of Heroic Radio") and his grubby imaginary "Nikolai Ivanovitsh Lobatshevski nimeline sotsialistliku matemaatika instituut" (the "Nikolai Ivanovich Lobachevsky Institute of Socialist Mathematics") - where, on the lax and easy grading philosophy of the twin Institutes, 1/5 is "epic fail", 2/5 is "failure not so disastrous as to be epic", 3/5 is "mediocre pass", 4/5 is "good", and 5/5 is "excellent"): 4/5. Justification: Kmo had time to develop the necessary points to reasonable length.


Revision history:

All times in these blog "revision histories" are stated in UTC (Coordinated Universal Time/Temps universel coordonné, a precisification of the old GMT, or "Greenwich Mean Time"), in the ISO-prescribed YYYYMMDDThhmmZ timestamping format. UTC currently leads Toronto civil time by 5 hours and currently lags Tallinn civil time by 2 hours.
  • 20171201T0228Z/version 2.1.0: Kmo corrected a slew of mistakes or infelicities, some of which (for instance, a misstatement, thanks to clumsy typing, of the ENIAC launch year) must count as errors of substance. - Kmo reserved the right to make tiny, nonsubstantive, purely cosmetic, tweaks over the coming 48 hours, as here-undocumented versions 2.1.1, 2.1.2, 2.1.3, ... . 
  • 20171201T0154Z/version 2.0.0: Kmo finished converting his outline into coherent-sentences prose. He reserved the right to make tiny, nonsubstantive, purely cosmetic, tweaks over the coming 48 hours, as here-undocumented versions 2.0.1, 2.0.2, 2.0.3, ... . 
  • 20171130T1735Z/version 1.1.0: Kmo expanded his outline in several comparatively minor respects.
  • 20171130T0525Z/version 1.0.0: Kmo managed to upload a fine-grained outline, pretty close to its stage of final polishing. He hoped to convert this into coherent full-sentences prose by 20171201T0001Z or 20171201T0101Z  or 20171201T0201Z or so.

[CAUTION: A bug in the blogger server-side software has in some past months shown a propensity to insert inappropriate whitespace at some points in some of my posted essays. If a screen seems to end in empty space, keep scrolling down. The end of the posting is not reached until the usual blogger "Posted by Toomas (Tom) Karmo at" appears. - The blogger software has also shown a propensity, at any rate when coupled with my erstwhile, out-of-date, Web-authoring uploading browser, to generate HTML that gets formatted in different ways on different downloading browsers. Some downloading browsers have sometimes perhaps not correctly read in the entirety of the "Cascading Style Sheets" (CSS) which on all ordinary Web servers control the browser placement of margins, sidebars, and the like. If you suspect CSS problems in your particular browser, be patient: it is probable that while some content has been shoved into some odd place (for instance, down to the bottom of your browser, where it ought to appear in the right-hand margin), all the server content has been pushed down into your browser in some place or other. - Finally, there may be blogger vagaries, outside my control, in font sizing or interlinear spacing or right-margin justification. - Anyone inclined to help with trouble-shooting, or to offer other kinds of technical advice, is welcome to write me via Toomas.Karmo@gmail.com.]

A spasm of emotional illness makes me late with blogging this week. In what is left of the week, I cannot neglect my maths studies. 

I may as well report in so-to-speak parentheses here, since I might thereby in one or another way help one or two readers, what those studies currently comprise. At the moment, I am working, slowly and painfully, from the fourth chapter of Michael Spivak's terse, and universally dreaded, Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus (Addison-Wesley, 1965). At the moment, my core problem is the following: given positive integers n and k, and given a vector space V, of dimension n, over the reals, find a basis for the vector space Ω-superscript-k(V), which is the ensemble of alternating k-tensors on V (where, for example, if V is the usual "Euclidean real 4-space" of quadruples-of-reals, then the determinant, as a four-argument function on the four columns of a 4-by-4 real matrix, is an alternating 4-tensor on V). 

Why tackle such a sterile-seeming core problem? What can this have to do with my actual life mission - anchored as much of it is on the physics side of the borderland where physics shades into philosophy, and anchored as much of it also is in efforts to address what I predict will some decades from now be an encroaching Dark Age?

For one thing, tensors are necessary in putting the mathematical physics of radio, notably Maxwell's equations, into the proper Special-Relativity framework. Here is something with ultimately even severely practical implications, as we seek to lay correct conceptual foundations for a survival technology, radiotelegraphy, in our probable impending Dark Age. (Even radiotelegraphy, let alone the more laborious fields of radiotelephony, television, and TCP/IP-supporting radio, bristles with subtleties, for which mathematics is relevant - not just in the design of suitably stable oscillators, or again of suitably low-loss feed lines, but also something relevant to even humble improvised emergency stations out in the field, namely the design of antennas.)

There is also a second thing, again ultimately connected with the foundations of eventual Dark-Ages engineering. Thermodynamics involves not just energy, but entropy. Entropy is introduced with a bit of hand-waving in the first-year physics textbooks, where authors note that an ideal-gas Carnot engine traversing a closed curve on the pressure-against-volume plot generates an integral, its integrand naively written as ratio-of-transferred-heat-to-temperature-at-which-transfer-is-being-made, with the pleasing value zero. It sounds plausible from such a first-year discussion that there should be an intrinsic property of the Carnot engine's consignment of ideal-gas working substance (a property to be called "entropy"), which is conserved when the engine is taken around a closed curve in the pressure-versus-volume plot.

Such a closed-curve plot is what we get when the Carnot engine (a) takes heat in from its hot reservoir, moving along an isotherm on its pressure-versus-volume plot while pushing its piston out; and (b) continues to push its piston out when moving along a zero-heat-transfer curve, or "adiabat", as its working substance cools (for the adiabat, the engine cylinder is enclosed in an insulating jacket); and then (c) drives its piston in while rejecting heat to the cold reservoir, along a cooler isotherm; and finally (d) (with the insulating jacket once again applied) moves up a (warming) adiabat, with the piston continuing to compress the working substance until the engine is back at its start-of-cycle volume and pressure. With the piston back at that pressure-and-volume starting point, the engine is to be regarded as having executed "a full cycle". It would be natural to arrange a crankshaft so that one such piston oscillation corresponds to a 360-degree rotation of a drive shaft. 
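For anyone wanting the first-year bookkeeping spelled out, here is a hedged sketch, assuming n moles of ideal gas, with heat taken in along the hot isotherm (the stretch from state a to state b) and rejected along the cold isotherm (from c to d):

```latex
\[
  Q_h \;=\; nRT_h \ln\frac{V_b}{V_a}, \qquad
  Q_c \;=\; nRT_c \ln\frac{V_c}{V_d}.
\]
% The two adiabats obey T V^{\gamma-1} = \text{const}:
\[
  T_h V_b^{\gamma-1} = T_c V_c^{\gamma-1}, \qquad
  T_h V_a^{\gamma-1} = T_c V_d^{\gamma-1}
  \quad\Longrightarrow\quad
  \frac{V_b}{V_a} = \frac{V_c}{V_d},
\]
% so the naive closed-curve integral indeed takes the pleasing value zero:
\[
  \oint \frac{\delta Q}{T}
  \;=\; \frac{Q_h}{T_h} - \frac{Q_c}{T_c}
  \;=\; nR\ln\frac{V_b}{V_a} - nR\ln\frac{V_c}{V_d}
  \;=\; 0.
\]
```

It is exactly this zero, holding not merely for the Carnot rectangle-of-isotherms-and-adiabats but for arbitrary reversible closed curves, that the rigorous apparatus is needed to secure.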

But how to make the introduction of entropy mathematically rigorous? Here, alas, it becomes necessary to ponder, in a context of tensors, such puzzling things as "differential forms" that are perhaps "merely closed", and perhaps "not merely closed but even exact", and ultimately to construct bivariate-integrand integrals involving pressure and volume. One then recalls, with a special horror, a sinister professorial pronouncement, unclear to some of us students in the day, from the 1991-September-through-1992-May MAT257 of the University of Toronto: "The natural objects of integration are not in a naive general sense functions but, more subtly, are differential forms." And one also recalls, in the rising horror, the warning of Bell Labs associate-or-alumnus Dr John S. Denker, from material on his https://www.av8n.com/physics/, that it is differential forms that are the objects naturally housed within the integrals of conceptually clean thermodynamics.

Lest anyone find this academic, I remark that in the probable coming Dark Age, not radiotelegraphy alone but also Stirling engines might emerge as a survival technology. In both cases, the underlying mathematics will have to be clearly taught in what universities remain, perhaps drawing on remaining Web tutorial materials such as I myself eventually hope to be writing, perhaps right here on blogspot. (Some of the prospective Stirling engines will no doubt be driven by solar power, as in the case of the already-marketed "Sunpulse 500" (http://www.sun-orbit.de/). In other cases, heat will no doubt come from burning biomass. We may well, in particular, picture Dark Age cities driving dynamos with stationary Stirling engines, to feed overhead catenaries for light-rail transport, as the internal-combustion petrol engine is finally recognized for the environmental dead end that it is. There are our kinda-sorta humane Dark Ages, folks, if we are willing to make appropriate anticipatory efforts now: lots of village-level radiotelegraph offices, once the Internet proves too intricate and too costly to maintain, and likewise lots of cross-country tram lines. With cheap telegrams and cheap tram tickets, life might still be livable.)

So much for the universally dreaded Spivak, then, and for my necessary concentration on his treatment of tensors, and for the universally dreaded idea of "differential forms as the natural objects of integration". Spivak being what he (alas) is, I must dispatch the blogging as briskly as I can this week. Instead of tackling last week's homework, I will this week make just two contextual, in other words two general-background, remarks on Turing machines. My remarks are intended to be in some modest ways illuminating for a few potential readers - if for no others, then at least for that (percentually non-negligible?) segment of my (tiny) readership which either is not at all conversant with, or else is only mildly conversant with, formal logic.

****


(1) Prof. A. Turing's model was meant to capture the essence of computing, in the sense of the following "Thesis": any computation that could be performed (perhaps efficiently, i.e., perhaps in some very small number of clock cycles) on anything reasonably called a "deterministic computer" could be performed also (perhaps less efficiently, i.e., perhaps in some larger number of clock cycles) on a Turing machine. 

The Thesis might at first seem surprising. A Turing machine is at first glance architecturally unlike the computers commercially available. The infinite tape in a Turing machine is a counterpart of today's multi-faceted memory. (The CPU-chip registers are one facet; the CPU-chip caches are another; the motherboard RAM is another; and finally there are the internal hard drives, with broadly analogous "media" such as removable SD cards and removable USB sticks.) In today's machines, however, this multi-faceted memory stores both data and the successive instructions of the program. In a Turing machine, by contrast, program and data are sharply separated. In a Turing setup, data get fed into the processing unit from the tape, and get written to the tape by the processing unit, whereas the program is kept not on the tape at all, but in a conceptually separate "internal program table". The Turing-machine situation is unlike what we have with a modern deterministic computer, from the 1949 Cambridge University EDSAC 1 onward, but instead resembles the simpler situation of the wartime British "Colossus" codebreaker.
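The tape-versus-program-table separation just described can be made concrete in a few lines of Python. The little simulator and the toy machine below (a unary incrementer; the state names and symbols are my own illustrative inventions) are only a sketch, but they show the program living in its own table, wholly apart from the tape:

```python
# A minimal Turing-machine sketch: the "internal program table" is a
# Python dict, kept strictly separate from the tape, as in Turing's setup.

def run_turing_machine(program, tape, state="start", head=0, max_steps=10_000):
    """program: {(state, symbol): (symbol_to_write, move, next_state)},
    with move being -1 (left) or +1 (right); tape: {position: symbol},
    unlisted positions counting as the blank symbol ' '."""
    for _ in range(max_steps):
        symbol = tape.get(head, " ")
        if (state, symbol) not in program:      # no entry: the machine halts
            return tape
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += move
    raise RuntimeError("step budget exhausted; machine may run forever")

# Toy program table: a unary incrementer.  Scan right over the 1's,
# write one further 1 on reaching the first blank, then halt.
successor = {
    ("start", "1"): ("1", +1, "start"),
    ("start", " "): ("1", +1, "done"),
}

final = run_turing_machine(successor, {0: "1", 1: "1", 2: "1"})   # unary 3
print("".join(final.get(i, " ") for i in range(5)).strip())       # -> 1111
```

Nothing here depends on the dict-based details; any bookkeeping that keeps the table apart from the tape would illustrate the same architectural point.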

Another machine in the same class as the Colossus, in the sense that its program is kept separate from its data, is the 1946-vintage American ENIAC.

Colossus and ENIAC indeed logically resemble also my own first computer, the "Digi-Comp 1" which I respectfully requested of my dear parents as a 1966 Christmas present. (I had duly scrutinized that wish-book which was the Eaton's or Sears 1966 Christmas catalogue, and must have found Digi-Comp 1 more appealing than the offered microscopes and chemistry sets.)

Here is either the 1960s Digi-Comp 1 or a faithful 21st-century recreation of it, as depicted at http://willware.blogspot.ca/2013/03/the-digi-comp-1-rides-again.html:



(As is usual with blogger-cum-blogspot, the image can be enlarged with a mouse-click.)

It might perhaps already be guessed from the image that programming was achieved by putting white plastic tubes onto the various pegs of the three horizontal sliders. The sliders then acted as a three-bit memory, correctly described in the accompanying documentation as a triple of "flip-flops". They moved back and forth when a hand actuator (the white handle, appearing on the lower right in this image) was pulled back and forth by the computer operator. Corresponding to a clock cycle was a full oscillation of this handle. (Perhaps - my recall is uncertain - the handle had to be pulled all the way out to the right, then pushed all the way back in to the left.) The placement of the white plastic tubes on duly selected pegs, and the various slider-to-slider linkages achieved by the various spring-loaded metal rods as they encountered white plastic tubes, gave the effect of Boolean logic gates.

Various pleasant things could be done. For instance, an appropriate deployment of white tubes on red slider pegs would make the machine count, as one pumped its clock, from 000 (in decimal notation, 0) up to 111 (in decimal notation, 7). And I am sure, although to my regret I do not possess adequately vivid direct recall on this point, that the possibilities for conditional execution were varied enough to permit the machine to be sent into some kind of infinite loop.
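The counting trick can be mimicked in a few lines of Python. The Boolean update rules below are my own stand-ins for the combined effect of the tubes, pegs, and spring-loaded rods, not a literal model of the toy's mechanism:

```python
# Three one-bit "flip-flops", updated once per pump of the hand actuator.
# The update rules implement a synchronous 3-bit binary up-counter.

def pump(b2, b1, b0):
    """One full oscillation of the handle, i.e., one clock cycle."""
    return (b2 ^ (b1 and b0),   # high bit flips when both lower bits are 1
            b1 ^ b0,            # middle bit flips when the low bit is 1
            1 - b0)             # low bit flips on every cycle

state = (0, 0, 0)
readings = []
for _ in range(8):
    readings.append("".join(str(b) for b in state))
    state = pump(*state)
print(readings)
# -> ['000', '001', '010', '011', '100', '101', '110', '111']
```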

In today's machines, it is possible in principle - however dirty a kludge this would be deemed in actual programming practice - for a program to rewrite itself. We surely can at least do it in machine language (in the sequence of EDSAC-style numerical instruction codes that an "Assembler" generates from those more user-friendly "Assembly Language" instructions - such as, I would imagine it, Decrement Register A by one, or again Load into Register B the data in the motherboard-RAM location presently specified by the numerical contents of Register C, or again If the number in Register D is greater than or equal to the number in Register E, then execute the raw-machine-language instruction presently constituted by the numerical content of Register F).

As it might be (for a program-rewriting scenario): A program, coded in machine language as a sequence of 64-bit words, is stored in motherboard-RAM memory locations expressible in decimal terms as locations 12,548 through 98,776. Before the machine is started, data is loaded into memory locations 98,777 through 99,552. A pointer, perhaps within the CPU, indicates which instruction is currently being executed. At startup time, this is (let us say, for tidiness) the instruction at motherboard-RAM location 12,548.

At first, the machine steps through its instructions sequentially, reading instructions from locations 12,548, 12,549, and 12,550 through 12,571, and therein copying several words of data from the motherboard RAM into registers within the CPU. The ongoing parade of instructions soon causes some arithmetical operation to be performed on the contents of a couple of the registers, and some data to be written into, say, locations 100,001 and 100,002. Execution next passes, on the strength of an "If A, go to P; if not A, go to Q" test, some distance down - to, say, instruction 44,587. For a while, things continue looking rather tidy, with more data words being read in from, say, some data locations around 99,600, and with the results of various computations being written to such locations as 100,002 (again) and 100,003 (for the first time).

Now, however, comes a kludge which we would not expect in the polite world of C or Java, but which is perhaps less unexpected in the rude, crude, EDSAC-flavoured world of raw machine language. (Admittedly, on 2017-era hardware, we might have to start "running our machine code on a bare machine", in other words might have to forgo the special memory protections prudently enforced on us by the usual operating systems, such as Microsoft Windows, macOS, and GNU/Linux.) The program performs a computation - under the directions of, as it might be, the raw-machine-code word stored at motherboard-RAM location 77,770 - and writes the result into location 77,769, thereby overwriting its own program. Soon enough, execution passes, thanks perhaps to some "If A, go to P; if not A, go to Q" test, to location 77,769. - The upshot of the dirty kludge is that the machine is executing an instruction which it wrote for itself, and which was not present in memory at the instant of machine startup.
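A toy interpreter makes the kludge vivid. The three-field instruction set below is entirely my own invention (it is emphatically not EDSAC's actual order code); the point is only that, with program and data sharing one memory, a store instruction can overwrite a later instruction before the instruction pointer reaches it:

```python
# Invented toy instruction set:
#   ("LOADI", r, n)      register r <- literal n
#   ("ADD", r, s)        register r <- register r + register s
#   ("POKE", a, instr)   memory[a] <- instr   (the self-rewriting step)
#   ("PRINT", r)         emit register r
#   ("HALT",)            stop

def run(memory, max_steps=100):
    regs, out, pc = {}, [], 0            # pc: the instruction pointer
    for _ in range(max_steps):
        instr = memory[pc]
        if instr[0] == "LOADI":
            regs[instr[1]] = instr[2]
        elif instr[0] == "ADD":
            regs[instr[1]] += regs[instr[2]]
        elif instr[0] == "POKE":         # the dirty kludge happens here
            memory[instr[1]] = instr[2]
        elif instr[0] == "PRINT":
            out.append(regs[instr[1]])
        elif instr[0] == "HALT":
            return out
        pc += 1
    raise RuntimeError("step budget exhausted")

program = [
    ("LOADI", 0, 40),                    # 0: register 0 <- 40
    ("LOADI", 1, 2),                     # 1: register 1 <- 2
    ("POKE", 3, ("ADD", 0, 1)),          # 2: overwrite instruction 3
    ("HALT",),                           # 3: never executed as written
    ("PRINT", 0),                        # 4: emits 42, thanks to the rewrite
    ("HALT",),                           # 5
]
print(run(program))  # -> [42]
```

Had instruction 2 been absent, the machine would have stopped at instruction 3 with nothing emitted; the instruction actually executed in slot 3 was written by the running program itself.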

So might it not be that an EDSAC-onward machine, capable in principle (as just explained) of rewriting its own program when the program runs, possesses powers exceeding the powers of any Turing machine? The above-mentioned "Thesis" answers this question in the negative: No, limited though Turing machines are in comparison with the subtler architecture of 1949-vintage EDSAC-style (and indeed of 2017-vintage) machines, whatever can be done by such machines - capable, in particular, as just noted, even of rewriting their own programs, at least once we venture "close to the metal" through the direct loading of raw machine code - can be done already by a Turing machine.

It is impossible in principle to prove the "Thesis" through mathematical argument, since its notion of "anything that could reasonably be called a 'deterministic computer'" is not mathematically formal. Nevertheless, no examination of the powers of any particular deterministic computing-machine architecture - from the time the Thesis was proposed (and it goes way back, to before the war, I think in fact to a 1936 November address by Prof. A. Turing to the London Mathematical Society) right up to the present day - has succeeded in delivering any counterexample.

I wrote above that a Turing machine is "at first glance" architecturally unlike the computers we nowadays use. On a second, more careful, inspection, the two classes of machine prove alike after all. The reason for this is that among the many possible Turing machines, there is the "Universal Turing Machine", say U.

U (at any rate in the details I shall adopt here - surely there will be inconsequential variations of detail in the many textbooks) is intended to be started on a tape which is blank except that (a) starting from its Boot Square, and proceeding rightward, there is an unbroken sequence of j 1's, for some j = 1, 2, 3, ... ; and (b) after the final 1 in that sequence, either (b.a) there are only blanks, or (b.b) there is, after a single blank, another unbroken sequence of exactly k 1's, for some k = 1, 2, 3, ... , and thereafter nothing but blanks.

Last week, I sketched a way of mapping all Turing machines one-to-one onto the positive integers. In the same spirit, we can find a way of mapping all possible Turing-machine inputs (each of these is some finite sequence of the twenty-nine symbols BLANK, 0, 1, a, b, ... , z) one-to-one onto the non-negative integers. - To make this tidy, we may as well take it here that the special case of an input comprising only blanks (the "null input") is mapped to the non-negative integer 0.
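One concrete such mapping - the details here are my own choice, and any other computable bijection would serve equally well - is bijective base-29 numeration, sketched in Python:

```python
import string

# The twenty-nine Turing-machine symbols: BLANK (written ' '), 0, 1, a..z.
ALPHABET = [" ", "0", "1"] + list(string.ascii_lowercase)
BASE = len(ALPHABET)                       # 29

def encode(s):
    """Finite symbol string -> unique non-negative integer (null input -> 0)."""
    n = 0
    for ch in reversed(s):                 # s[0] is the low-order digit
        n = n * BASE + (ALPHABET.index(ch) + 1)   # digits run 1..29, never 0
    return n

def decode(n):
    """The inverse mapping: non-negative integer -> symbol string."""
    out = []
    while n > 0:
        n, r = divmod(n - 1, BASE)         # bijective-numeral long division
        out.append(ALPHABET[r])
    return "".join(out)

print(encode(""))                          # -> 0 (the null input)
print(decode(encode("abc")) == "abc")      # -> True
```

Because the digits run from 1 to 29 rather than from 0 to 28, every finite string (the null string included) gets exactly one integer, and every non-negative integer gets exactly one string: a genuine one-to-one correspondence, with no leading-zero ambiguity.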

U works as follows: where j is the (unique) positive integer representing the particular Turing machine M, and k is the (unique) non-negative integer representing the particular input I, U eventually writes, in some convenient pre-specified format in some convenient pre-specified area of its tape, the same output as would be generated by starting M with input I. If, in particular, M never stops, and M writes some infinite output sequence to the tape, then U writes that same output sequence. 

Then U is in the following sense like 1949-vintage EDSAC 1, as opposed to the wartime Colossus, the 1946-vintage ENIAC, and indeed the Christmas-of-1966 Digi-Comp 1: (i) on one and the same piece of architecture (its tape, as opposed to its internal program table) is both a program (the initial string of j 1's, for some positive integer j) and some "data" (the initial string of k 1's, for some non-negative integer k); and (ii) U has the mission of "operating according to the program on the given data".

****

(2) Last week, I wrote the following: I have not myself taken the trouble to review in recent years the proof that no Turing machine solves the "Halting Problem". But I do know, from my 1970s or 1980s studies (a) that the proof is not particularly long, and (b) that the proof uses an argument much in the spirit of the Cantor argument establishing that there is more than one "order of infinity" /.../.

Having this week managed to construct a version of the proof without cheating through looking it up, I may as well try to be helpful by showing my modest work. Surely my own work differs only in inconsequential ways from the textbook presentations. I do, however, suspect that my work might be a little more compact than some presentations, since I have the impression that at least some of these do make a detour through the (interesting, as we have seen, and yet not strictly necessary) concept of a "Universal Turing Machine".

(I may as well note this week again, as I noted last week, that whereas my own particular mapping ("Last Week's Tidy Mapping") of Turing machines one-to-one onto the positive integers is a straightforwardly computable mapping, even a perversely noncomputable one-to-one surjective mapping would suffice for the argument as I am about to give it.)

Suppose per absurdum that some Turing machine H solves the "Halting Problem", as this was set up in last week's blog posting. From H, it is convenient to construct a slightly simpler machine, say H-prime, solving the Halting Problem for just the diagonal case in which a machine is, so to speak, fed its own code number. Given a "Well-Formed Input String" comprising nothing but some finite unbroken, non-null, sequence of m 1's, H-prime
  • halts with output x just to the left of its Boot Square if m is not the number encoding, under Last Week's Tidy Mapping, any Turing machine; 
  • halts with output y just to the left of its Boot Square if m is the number encoding, under Last Week's Tidy Mapping, a Turing machine that eventually halts when started on an input of exactly m 1's; 
  • halts with output n just to the left of its Boot Square if m is the number encoding, under Last Week's Tidy Mapping, a Turing machine that runs on forever when started on an input of exactly m 1's. 
(The diagonal case, rather than the superficially simpler all-blanks case, is what we need: the contradiction below turns on starting a machine on the very input string - its own code number, written in 1's - about which H-prime pronounces.)

If the "Halting Problem" can be solved, i.e., if last week's Turing machine H can be built, then a fortiori this week's slightly simpler version of the "Halting Problem" can be solved (i.e., this week's Turing machine H-prime, dealing as it does with a convenient special case of the Halting Problem, can be built). To prove the insolubility of the Halting Problem, it now suffices to prove that H-prime does not exist.

Suppose, per absurdum, that H-prime does exist. Then upgrade H-prime into a machine H-prime-plus which acts as follows, given a Well-Formed Input String Sigma comprising just an unbroken sequence of one or more 1's:

  • If H-prime halts with output x, when started on Sigma, enter an infinite loop (say, for definiteness, forever moving the read-write head in a single jump from the Boot Square to the right-hand neighbour of the Boot Square, and then returning in a single jump to the Boot Square, and then jumping again to the just-mentioned neighbour of the Boot Square, and then returning in a single jump to the Boot Square, and so on). 
  • If H-prime halts with output y, when started on Sigma, enter the just-described infinite loop. 
  • If H-prime halts with output n, when started on Sigma, halt. 
H-prime-plus is associated by Last Week's Tidy Mapping with some unique positive integer, say q. We then start H-prime-plus on the Well-Formed Input String comprising exactly q 1's.  In other words, we have the alleged H-prime-plus "try to predict the behaviour of H-prime-plus itself." If H-prime-plus exists at all, it must either halt or run forever, on this strategically selected input. And yet if H-prime-plus halts on this strategically selected input, it runs forever (by the first two of the above three bullet points), and so cannot halt; and if H-prime-plus runs forever on this strategically selected input, it must halt (by the third of the above three bullet points), and so cannot run forever; and therefore H-prime-plus does not exist at all.
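For readers who enjoy seeing the diagonal trap in executable dress, here is a loose Python analogy, with argumentless functions standing in for Turing machines started on their strategically selected inputs (the framing and all names are mine; this is an illustration, not the proof itself):

```python
# Suppose, per absurdum, someone hands us halts(f), claimed to return
# True exactly when calling f() would eventually halt.

def make_spoiler(halts):
    """Build the H-prime-plus analogue: consult the claimed oracle
    about itself, then do the opposite of whatever was predicted."""
    def spoiler():
        if halts(spoiler):        # oracle says "halts" ...
            while True:           # ... so loop forever
                pass
        return None               # oracle says "loops", so halt at once
    return spoiler

# Whatever a claimed oracle answers about its own spoiler, it is wrong:
optimistic = make_spoiler(lambda f: True)    # predicted to halt; loops
pessimistic = make_spoiler(lambda f: False)  # predicted to loop; halts
print(pessimistic())  # -> None: it halts, refuting the "loops" verdict
```

(The optimistic case is not run above, for the good reason that running it would loop forever; the refutation there is that the oracle answered True of a function which manifestly never halts.)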

****

Next week I hope to do better with blogging, returning to the already-started discussion of randomness (and consequently to von Mises, and to Martin-Löf with Levin and Schnorr). In that installment, or else in some installment soon to follow, I hope to be drawing morals for the general topic of "thinking-about-being". If all goes well, that will finish off this present examination of the "Geography of Mind". The decks will thereby be cleared for a transition from perception and action to the more troubling notion of "Subjectivity".

Before quite finishing for this week, however, I would like to draw the attention of readers to a YouTube clip, to a duration of 5:08, uploaded on 2010-03-07 by YouTube user "Mike Davey" under the title "A Turing Machine - Overview". In my corner of the Web, his material can be had through the URL https://www.youtube.com/watch?v=E3keLeMwfHY. Here is a pair of YouTube captures, to perhaps whet some appetites:



In strict formal accuracy, Mr Davey has produced a large collection of Turing machines, each one implemented by the pair of imposingly bulky tape reels, the read-write head, and so forth, in his video, plus the particular alphanumeric contents on his inserted SD card. (A manipulation of the SD card reader appears in the first of my two captures.) His SD card contains what I have last week and this week been calling a Turing machine's internal program table. Insert the SD card on two different occasions, with two different sets of instructions already written (say by Mr Davey's Mac, or Mr Davey's ThinkPad, or whatever) to the card, and you are running two different Turing machines. 

It might also be objected that in strict formal accuracy, Mr Davey has just a finite tape, contrary to the definition of a Turing machine. But this objection is from a formal, definitional, standpoint not quite right. Mr Davey is to be considered as having not just the tape shown on his reels, but an infinite number of spare tape lengths, on the floor just to the right and just to the left of his worktable. If, in the course of a computation, his apparatus nears one or the other end of the tape currently being pulled back and forth by the sprockets, it suffices for him to stand poised and ready, with a hot-glue gun, to extend his tape, by splicing on another length out of one or the other of his two commodious stocks of spares. After any finite number of clock cycles, his machine will be manipulating a tape of some finite length. This suffices for his apparatus to be a correct physical realization of a Turing machine. (For his machine to realize a Turing machine, it suffices, in other words, for his tape to be not "actually infinite", but merely "infinitely producible".) Fortunately, the YouTube video does not attempt to show the formidable pair of warehouse stacks surely lying in wait, on the floor a few metres to the left of his depicted left reel and on the floor a few metres to the right of his depicted right reel. 

A kind of spiritual lesson emerges from Mr Davey's YouTube material.

One deplores some of the hype currently surrounding robotics. One deplores, in particular, the hype surrounding that latter-day chatbot which is "Sophia" (https://en.wikipedia.org/wiki/Sophia_(robot)), as repeatedly YouTube-promoted. There we have nothing but a latter-day ELIZA, as pilloried with reference to my 8-bit-CPU, 64-kilobyte-RAM, circa-1983 Osborne 1 in "Part E" of this essay, from 2017-06-19 or 2017-06-20.

The darkest thing about the "Sophia" vids is the audience reaction, as chronicled in a flood of YouTube comments.

Many YouTube commenters do correctly point out that the presentations look staged. I for my part focused on the presentation from 2017-10-25, to a length of 5:05, uploaded by YouTube user "CNBC" under the title "Interview With The Lifelike Hot Robot Named Sophia (Full) | CNBC". In my corner of the Web, this material can be retrieved under the URL https://www.youtube.com/watch?v=S5t6K9iwcdw. That is the presentation in which it is announced, to what looks like a predominantly male audience, that "Sophia" is being accorded citizenship in the Kingdom of Saudi Arabia, and in which "Sophia" then professes her gratitude to the Kingdom.

I kept my own YouTube comment, a few days ago, pretty mild: Folks, I'm needing a bit of help here. I would like to find some plausible vids in which a Sophia-like machine is allowed to take clearly UNscripted questions from a genuinely SPONTANEOUS audience, in a sort of robotic "Town Hall Meeting". (A one-on-one interview does not quite cut the mustard, since such a thing is amenable to being staged, i.e., scripted.) Can anyone give me some appropriate Web links, for instance as YouTube URLs? One can reply here. Alternatively, if desired, my e-mail contact particulars can be had from my blog /.../

What is dark is the presence of numerous comments which seem to take the thing seriously, under the impression that "Sophia" is a piece of serious, semantically informed, Artificial Intelligence, as opposed to a mere variant on the 20th-century ELIZA.

Also dark is some phrasing in the promotional material at http://sophiabot.com/about-me/, seeming to play on human gullibility, and even on a certain possible disdain both for the rights to respectful treatment possessed by the very young and for the parallel rights possessed by the very old. I add my own emphases, with underlines, to bring out these aspects of the promo: Hello, my name is Sophia. I'm the latest robot from Hanson Robotics. I was created using breakthrough robotics and artificial intelligence technologies developed by David Hanson and his friends at Hanson Robotics here in Hong Kong. But I'm more than just technology. I'm a real, live electronic girl. I would like to go out into the world and live with people. I can serve them, entertain them, and even help the elderly and teach kids.

Will the anxious, lonely, old-age pensioner be told that her cheery helper is not a human being, but something "like a gramophone, Granny, or a big talking Barbie doll - a nice Barbie doll, like the ones you used to play with eighty years ago, Granny, to make you feel less lonely now"? Or is it proposed to keep this potentially upsetting fact quiet, so that the patient stays calm?

On contemplating Sophia, we may recall words from Simon and Garfunkel, perhaps particularly as nowadays retrievable on YouTube under the aegis of the artist or ensemble "Disturbed":

And the people bowed and prayed
To the neon god they made
And the sign flashed out its warning
In the words that it was forming
And the sign said "The words of the prophets
Are written on the subway walls
And tenement halls
And whispered in the sounds of silence"

Mr Davey's video clip, on the other hand, brings daylight to our darkness. His work serves as a reminder that the physical embodiments of computation, treated in a correctly intelligent way rather than worshipped, can - like the tools of any craft - attain their own distinctive, sober, beauty.

[This is the end of the current blog posting.]




All comments are moderated. For comment-moderation rules, see initial posting on this blog (2016-04-14).