So here I am back up, computer back on, reacting to "A New Way to Squash Bugs." Let me be perfectly clear: Author Charles Scalfani's functional programming does no such thing. "Functional programming" means subroutines without side effects, no changes in the computer state that persist after the subroutine exits. Mathematical functions work this way, but not much else. Any benefit he may get from trying to force-fit other kinds of problems into the Procrustean functional bed is probably the same benefit realized from trying to force-fit non-object-oriented problems into an OOPS bed: You must think extra hard to make it work at all, and it is the extra thinking that eliminates the bugs, not the tool.
He credits the programming language Haskell, created in 1990, with being "the first purely functional language to become popular." I know better: LISP was popular for AI applications for the previous three decades, and it was originally created as a purely functional language. It didn't stay that way, because too many programs need state and side effects.
My 1985 PhD dissertation invented and described "Transformational Attribute Grammars" (TAGs), which were in effect a purely functional language. You can download a working TAG self-compiler from my website, which implements the insights of my dissertation. It is still a purely functional programming language -- well, not quite: I needed a global variable to hold the generated output code. I could have simulated it in a state variable carried around everywhere, but that was much too slow. LISP probably had the same experience. Both Haskell and Scalfani's own functional programming language "PureScript" can read and write data files, which are side effects. If nothing else, you can write your global variables to a file and read them later. It's even slower than carrying around a giant state variable, which means there are at least two different ways to code non-functional subroutines in what Scalfani claims is a purely functional programming language. It's just slow and difficult.
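For readers who have never tried it, here is a minimal Java sketch (my own illustration, not code from the TAG compiler) of the difference between a mutable global output buffer and the purely functional alternative of threading the state through every call. The copying on every call is exactly what makes the pure version too slow:

```java
// Illustration only: two ways to collect generated output code.
public class EmitDemo {
    // Impure: one mutable global, cheap to append to.
    static final StringBuilder GLOBAL_OUT = new StringBuilder();
    static void emitGlobal(String instr) {
        GLOBAL_OUT.append(instr).append('\n');
    }

    // Pure: the output-so-far is an argument and a (new) result.
    // Copying the whole state on every emit is what makes it slow.
    static java.util.List<String> emitPure(java.util.List<String> out, String instr) {
        java.util.List<String> next = new java.util.ArrayList<>(out);
        next.add(instr);
        return next;
    }

    public static void main(String[] args) {
        java.util.List<String> out = new java.util.ArrayList<>();
        out = emitPure(out, "LOAD A");   // each call hands back a new list
        out = emitPure(out, "ADD B");
        System.out.println(out);         // [LOAD A, ADD B]
    }
}
```

The pure version leaves every earlier state untouched (no side effects), at the price of copying the entire output on every emit.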
Scalfani argues his case for a purely functional programming language by analogy with the banishment of GOTO from so-called "structured" programming languages (pretty much everything today), but he got his facts wrong. I code in Java every day, and I use GOTOs all the time, except they are spelled "break" and "return" and "throw." They are not quite as anarchic as the old GOTO, but they do just about everything I need in a GOTO, and the rest I program around using global state variables. And yes, my programs are hard to read and hard to maintain, but it's not the fault of the GOTOs, it's because I build very complex programs that most programmers couldn't even begin to do.
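For anyone who doubts that Java's jumps are GOTOs in disguise, here is a small example of my own (not from Scalfani's article): a labeled jump escapes nested loops, which is what the old GOTO was most often used for:

```java
// Illustration only: Java's structured jumps doing GOTO's job.
public class JumpDemo {
    // Find the first pair (i, j), in row order, with i * j == target.
    static int[] findPair(int limit, int target) {
        search:                             // label: the jump destination
        for (int i = 1; i <= limit; i++) {
            for (int j = 1; j <= limit; j++) {
                if (i * j == target) {
                    return new int[]{i, j}; // early return: a tamed GOTO
                }
                if (i * j > target) {
                    continue search;        // jump out to the outer loop
                }
            }
        }
        return null;                        // not found
    }

    public static void main(String[] args) {
        int[] p = findPair(10, 12);
        System.out.println(p[0] + "," + p[1]); // 2,6
    }
}
```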
The other problem Scalfani thinks his functional programming solves is null pointers. Maybe he has banished null pointers as such, but he has not solved the problem, only made it worse. Null pointers (data that has not been given a legitimate value) are not the problem, sloppy code is the problem, and the solution he offers in his goofy PureScript syntax is just as easily implemented in readable Java code -- I know, because my T2 (Java) compiler does it essentially the same way, but I need to ask for it each time. I could have made it default on, but I aimed my programming language at a much broader class of programs than Scalfani contemplates.
Most problems that come before a programmer have occasion to deal with invalid data. For example, you'd think that numerical data is just numbers -- except when the input failed or you tried to divide by zero, stuff like that. Before Professor Kahan made his case to the IEEE standards committee, numbers were just numbers, and Kahan could take a standard off-the-shelf commercial 12-digit calculator and do a simple (standard) 30-year mortgage calculation that was wrong by whole dollars. Kahan's whole career was dedicated to getting accurate, correct calculations, and his "NaN" (Not a Number) innovation wasn't the most important in our* standard (correct rounding was far more critical), but it was part of what made our standard universally accepted today. Null pointers are a useful and important (and easily tested, no extra hardware needed as for NaNs) way to represent invalid data. Take that away and programmers will get around the limitation by building more complex data structures with a flag that means "invalid," or by co-opting one of the legitimate data values to mean "invalid," and you will have exactly the same (or worse!) bugs, but the compiler and run-time cannot help you find them. As I said, it's not the functional language that offers this help, but a simple use of data-flow analysis already done in most compilers, including my own Java compiler.
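To make the parallel concrete, here is a small Java sketch (my own illustration) of NaN and null serving the same purpose: both are cheap, testable markers for invalid data:

```java
// Illustration only: NaN and null as testable invalid-data markers.
public class InvalidDemo {
    // A computation that can fail: IEEE 754 says 0.0/0.0 is NaN,
    // and NaN propagates through later arithmetic automatically.
    static double ratio(double a, double b) {
        return a / b;
    }

    // A parse that can fail: null marks "no legitimate value."
    static Integer parse(String s) {
        try {
            return Integer.valueOf(s);
        } catch (NumberFormatException e) {
            return null;    // easily tested, no hardware support needed
        }
    }

    public static void main(String[] args) {
        System.out.println(Double.isNaN(ratio(0.0, 0.0))); // true
        System.out.println(parse("oops") == null);         // true
    }
}
```

Banish null and you must invent some other "invalid" marker; the bugs stay, but the tools stop helping you find them.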
I have spent maybe five or six years teaching programming, and an order of magnitude longer using -- and writing compilers for -- a variety of languages, so I understand the problems Scalfani thinks he is solving. He has one idea correct: we want the compiler to catch as many programmer errors as possible, but that does not necessarily require a more limited language like functional programming. He just doesn't know what he doesn't know.
IEEE Spectrum used to print author contact information, but not this issue. They also have no feedback (letters) page, so I have no way to tell Scalfani what I think of his idea and his product. He probably wouldn't want to know anyway. He has a goofy toy language that solves toy problems by making them harder to program (so you think harder, and therefore make fewer mistakes), and being told it's only a toy would be disaffirming, and the American edu-factories do nothing so well as teach little kids the importance of unearned self-esteem (at which we rated highest, while being dead last among industrial nations in actual math and science). Whatever.
* Our standard -- Yes, I was on the original IEEE 754 Floating Point Standard draft committee with Dr. Kahan, his student Jerome Coonen, and John Palmer, an engineer at Intel. We were sometimes called "the Gang of Four," and the draft standard was first composed on my computer.
In the current (November: they generally arrive around the middle of the month named on the cover, so December isn't due for a couple weeks or so) ComputingEdge, the last three articles address different facets of the problem I have been shouting from the rooftops for the last three or four years: that Neural Nets (NNs) as so-called "Artificial Intelligence" are not very intelligent. No, they aren't saying exactly that -- yet.
What they are doing is responding to the increasing public outcry that NNs are opaque and cannot be trusted to give answers as intelligent as (smart, professional) human beings. The NN people are not yet ready to give up their Religion ("Believing what you know ain't so," or more precisely, believing what is contrary to objective data), but they are reacting defensively.
The first of these three articles is an interview with Fritz Kunze, one of the luminaries in the Lisp community, largely about the history of Lisp as used in what used to be called (rather more accurately) "Artificial Intelligence" before NNs took over the name. This is relevant because most of the "Good Old Fashioned AI" (GOFAI, pronounced "Go-fye") was programmed in Lisp. This was the stuff where if a human did it, it would be called intelligent, and it could be explained in terms that are recognizably intelligent. NNs on the other hand, make their decisions on the basis of similarity to averages accumulated over thousands or millions of random data, where the averages are not known even to be measuring what the label says the data is. So you can get a NN-based system seeing a "dog and cat playing frisbee" in a photograph of three cupcakes, or "gorillas" in a selfie of two African-Americans. Yes, both of those happened. Fritz probably assumes that NNs are giving valid results, but somebody knows that the inferential logic of GOFAI is going to be needed to explain and validate NNs, and the interest in Lisp is probably already resurging. Otherwise, why here and now?
The third item is a short page-and-a-half piece by a (female) professor at a second-tier university where they should know better, basically pooh-poohing the problem on the supposition that cleaner data will solve it. Again, this is reactionary.
The middle piece, "Knowledge-Intensive Language Understanding for Explainable AI" is much more on-target. Let me be perfectly clear, the authors, mostly with India-sounding names at a third-tier American university (and one in India), still believe that the NNs they are looking at do in fact generate valid results, and their work is intended only to produce explanations that ordinary (human) experts in the field can understand and believe. Their conclusion at the end of six pages explaining how they hope to achieve this result admits
XAI needs to offer explanations that the end-user or domain expert can easily comprehend. However, a user does not think in terms of low-level features, nor does he understand the inner workings of an AI system. Instead, he thinks in terms of abstract, conceptual, process-oriented, and task-oriented knowledge external to the AI system. Such external knowledge also needs to be explicit [and] must be infused into a black-box AI model...

They are looking at NNs that work with words, not images, and probably not individual letters but numbers representing unique whole words (five digits = 16 bits in "Deep-speare"). There is no semantic information for the NNs to base their decisions on, so when they ran "First derivative saliency" experiments (and others) tracing the decisions back through the NN layers to see what contributed to the decisions -- that's the "low-level features" mentioned in the conclusion -- they couldn't make sense of it. Which is as I have been saying. So now they want to take "Knowledge-Intensive" semantic graphs (which were created by real people using human-understandable abstractions) and feed that information back through the NNs with the expectation that the NNs will then be able to explain their decisions in terms of those semantic graphs and abstractions.
Of course the NNs never operated on the basis of abstractions -- and never will -- so one of two things will happen, depending on how careful they are. Either the NNs will include the semantic graphs in their decisions and produce totally different results, which may still fail to be understandable by humans looking at the results and the "explanations," or else the NNs will generate independent results and explanations, and they will find themselves back at the starting gate. A likely third possibility, given that this is religion, not science, is they might be able (by fine-tuning their NN parameters) to get results with credible explanations, but when they release their system out to the public, the real-world data won't match the training data, and they will be embarrassed by the moral equivalents of a "dog and cat playing frisbee" or "gorillas."
Anyway, the semantic graphs are mostly programmed in Lisp for natural-language words in a dictionary, and we have no such "domain experts" to build such graphs for images. The only way we can get explainable results from image classification is if somebody figures out how human vision processes the visual data, then feeds that kind of processed data to the NNs. And it may still not work as well as humans doing the same thing. But we won't know until they try, and probably not in the next decade or two, which come to think of it, is about the same as Melanie Mitchell's prediction (see my review of her book).
This church tries hard to meet the spiritual needs of their congregation. At least it seems that way. They conscientiously preach the Word. At least the Senior Pastor does. He doesn't seem to give much guidance to his associates. Whatever. This year they handed out bookmarks, each with a different "Spiritual Discipline" divided into exercises, one each week, a different Spiritual Discipline each month, sort of like when I was in Boy Scouts, we were to do "a Good Deed for the day," like one was enough. What a crock. This book, we are told, is the Spiritual Discipline for December, 31 chapters, three to five pages each, one each day of the month.
This book is aimed at persons other than myself. It is repetitious, maybe even ponderous. It's meant to be read slowly -- to children (explicitly) -- and maybe to Feelers who can derive warm fuzzies from it. That's not me.
When I was a member of a liturgical church, I appreciated the liturgical seasons. Like the hymns, they fed my soul at that time. That was then, this is now. Apart from five minutes near the beginning of each of the four Sundays in Advent, this church doesn't do liturgy. They also don't do hymns. What they do in its place is musically insipid and theologically dubious (see "Collected CCM Posts in My Blog"). So these days, when my soul feels hungry, I binge on recorded hymn tunes, like last Friday for a half-hour or more (see also "CCM: Feeding Garbage to My Soul"). But liturgy -- including Advent -- is not where I'm at this year. The imprecatory Psalms are for where I was two years ago, but not today. The Bible is like that, diff'rent strokes for diff'rent folks.
But I'll give the guy a fair hearing. I'm glad they didn't pick this year's offering from ChristianityToday (they did last year, but this year it was a big turn-off for me. Too many women. I read through it -- and hoped). God is Good. So here I am reading Tripp. I don't know where the church is getting their Advent readings this year, but I don't think it's Tripp. Whatever.
It turns out I'm not the only one thinking about it, but the only people addressing the question are offering alternative solutions based on (at least some) invalid economic assumptions. Not only is the (Bitcoin, I don't know about the others) blockchain explicitly a Ponzi scheme, where the early adopters cash out from the efforts and investments of later entries who pay increasingly more for the privilege, but the storage problem has the same flaw. You do a transaction on the blockchain, and you pay a one-time fee; then your transaction -- or smart contract, or whatever -- is stored forever in the blockchain at no additional cost.
But storage has a real cost: a couple hundred dollars (or less) for a 1TB hard drive, under $100/TB per year in the cloud. That's going down, but not at Moore's Law rates, and this year global inflation is driving prices up faster than that. I Googled and found multiple hits (probably all from the same source) saying the current Bitcoin blockchain is about a third of a terabyte, with wild guesses varying from a few hundred to some 40,000 unique copies, amounting to an annual cost of maybe $1 million, spread out over these users, mostly to keep other people's data secure. Bitcoin only stores a few thousand bytes per transaction (tens of millions of transactions total in the current blockchain); what will happen if it grows to billions of transactions? Maybe it can't: there appears to be some kind of built-in throttle to slow things down, but if you cannot sell your Bitcoins, their value will drop to near-zero.
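The arithmetic is simple enough to check. Here is the back-of-envelope calculation in Java; the inputs are the rough figures above (the copy count is a guess in the middle of the guessed range), not measurements:

```java
// Illustration only, using the rough figures quoted above:
// annual storage cost = chain size * full copies * price per TB-year.
public class ChainCost {
    static double annualCostUSD(double chainTB, int fullCopies, double usdPerTBYear) {
        return chainTB * fullCopies * usdPerTBYear;
    }

    public static void main(String[] args) {
        // ~1/3 TB chain, a (guessed) 30,000 full copies, ~$100/TB-year:
        double cost = annualCostUSD(1.0 / 3.0, 30_000, 100.0);
        System.out.printf("about $%.0f per year%n", cost);
    }
}
```

A third of a terabyte times 30,000 copies times $100 per TB-year comes to about a million dollars a year, which matches the "maybe $1 million" figure.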
I don't care about Bitcoin, their end is now in sight. I care more about the idiots promoting other blockchain implementations (like Ethereum) which can do smart contracts -- that's code embedded in the transactions -- and program code obeys Parkinson's Law: there's more of it than there is space for, so the storage costs will grow astronomically. Not today when only academics are talking about smart contracts, but if it were to take off, it would start to experience the same effective (if not intentional) throttling as is built into Bitcoin, and those "smart" contracts will become stupid and unenforceable and worthless. But how many "Global South" businesses will lose their financial shirts in the process?
The people promoting blockchain as a solution for third-world countries are every bit as colonial as the European countries colonizing the global south were a couple centuries ago. They are profiting off the ignorance of people not as lucky as themselves. And because blockchain is intentionally opaque, they have suckers in the industrial nations too. P. T. Barnum's famous saying is still valid: "There's a sucker born every minute."
Obviously I don't want to be one of the suckers, but I also don't want to be one of the perps; it's immoral to benefit from keeping other people ignorant. I guess that's why I'm in the education business.
Blockchain, as you know, is a technology intentionally obfuscated to hide the fact that it's originally a Ponzi scheme expected to grow into a medium for liars and thieves to do their wicked business without getting caught by the money trail. The money trail is more exposed than anybody realized. The audit trail is (somewhat) public information, where the identities of the perps is encrypted in the blockchain, but not at the cash-out terminal. The government needs to work a bit harder to catch the Bad Guys, but they mostly succeed. Mining new blockchain tokens is now exceedingly expensive (reported to have contributed 20% of new atmospheric carbon), so the Ponzi benefits are largely now expired. All that's left is a highly volatile medium of currency...
...And so-called "smart contracts." A contract is an agreement between (usually two) parties, typically to perform some task and get paid for it, one or both parts to occur at some later date. As that ancient Jewish philosopher once said, "The heart is deceitful above all things, and desperately wicked." Sometimes people want to back out of their side of the contract (especially if they already got their benefit of the bargain, but also when the other party suckered him into an unfair deal). So it goes to court. Or if the other party is the Mafia, they come and "break-a the kneecaps." In third-world situations (think: "Global South") you can bribe the judge and/or the local sheriff to get out of doing your part. "Baksheesh," my missionary friend said, rubbing his thumb and fingers together. A smart contract is a computer program embedded in a blockchain, which presumably carries out the terms of the contract. And we all know that computer programs never fail, right?
Worse, these programs are one-off code, so it never gets debugged until it's time to perform, and if it fails, "Too bad, so sad!" The two languages this article mentions for programming these smart contracts never show up in any list of popular programming languages (except blockchain-specific lists), so the chance of getting a competent programmer to do this is extremely small -- and pricey: average annual salaries of $158K and $250K were mentioned in the article; salaries like this are certainly not available in underdeveloped countries with infrastructure so bad as to make blockchain seem desirable. And even if it performs as intended, the payment is in blockchain crypto-currency, which leaves (so we have been told, except for drug and child porn busts) no audit trail, and may have a radically different value when it comes time to pay the bill, compared to what the contractors intended when it was negotiated.
It gets still worse: because these are non-standard programming languages, there is no community of language watchdogs and no alternate compilers, and therefore no protection against corrupted compilers, which in an environment intentionally obfuscated like blockchain, probably is harder for the business client to detect than a corrupt judge in the national court system.
The authors admit that "Most smart contracts require access to data related to real-world conditions... Oracles are the only mechanism..." but not protected by the blockchain encryption, and that there may be different results from different oracles. Why should anybody want to trust such oracles? How do we know they can be trusted? We don't.
It seems to me that there is far less in smart contracts to inspire confidence than in a national court system that, however corrupt it may be, is at least reasonably visible to far more vested parties than the intentional opacity of blockchain.
Hmmm, I see the next article is also promoting blockchain, this time as a way of protecting data privacy. As if. Anything you put on the internet is in the public view, and you are a fool if you believe otherwise.
Internet privacy statements -- and anything (such as blockchain) purporting to protect it -- are probably about as worthless as the annual (required by law) notices sent out by every American bank, six or ten pages of tiny gray print that sounds protective enough -- until you get somewhere past the middle, where you are least likely to see it, and they add the line "or as permitted by law." Which means nothing more or less than they won't do anything illegal (which they won't anyway, because bank managers don't want to go to jail), and the "or" means that all the rest of the gray text you just waded through is completely nullified and the whole document says nothing.
The heroine (female author = female lead) is a successful playwright who doesn't need to live on her vast inherited wealth. But the love of money is the root of all kinds of evil, so here comes an actor who does a good job of seducing her into marriage. The writers understand this "love" stuff and everybody plays it well. The audience believes it. Against her lawyer's advice, the starry-eyed new bride wants to change her will to leave her wealth unconditionally to the otherwise penniless new husband, as a reward for his making her happy. "Unconditional love" is the mantra of Relationshipists and Feelers everywhere.
Except it cannot be unconditional, not in this story, not ever. We soon learn that husband's former lover has come across the country to recapture her own romantic joy, and before the new will can be signed, they see the lawyer's draft and plot to kill the heiress, who learns of the plot on her dictating machine accidentally left running, whence the movie (and book) title.
All this woman's joy and romance is destroyed in an instant by the unpleasant truth that she was lied to. Truth is the unstated moral absolute that trumps all the affirmation built into romantic love. She still had those weeks of joy while he romanced her, but knowing it was fake -- or at least that's what he told his prior lover, but that could also be a lie -- just the thought that it could have been fake made the affirmation worthless (see my essay "A Case Study in Moral Ambiguity").
This is not a guy flick. It is billed in the top Google hits as "film noir" because it's not really a chick flick either. It's an honest look at the risks in "unconditional love" by a woman who wants to believe in it, but can't. Guy flicks don't have that problem: Truth, Justice, and Duty are moral absolutes, so there is never this conflict.
"If It Ain't Broke, Don't Fix It"

Zoom wasn't broken, but there were four things I wished they had:

1. An audible chime when something happened I need to know about, like a new student logged in, or somebody asked for help
2. The tab key didn't work properly when creating windows, so my startup process took longer than necessary (and time is money)
3. No way to automatically capture chat to a file (I sometimes forget to save it)
4. It degraded poorly when the bandwidth was limited

None of these got fixed, but it was usable as is -- so they fixed it anyway, and now it's broken (or at least slower and harder to use than before).
The older version announced yesterday that it could no longer run, just as I was trying to get things up for my early-morning student(s). So it installed the new version, complete with a new "agreement" that was previously unnecessary, then blocked startup pending a "security code" sent to somebody else's computer, and then I couldn't find the features I use every day -- they are now hidden in an obscure popup.
Things that are better with this new "upgrade": Nothing.
Things that are now worse or completely nonfunctional:
a. It no longer remembers the window size and position next time I come to a new "room"
b. The clutter of useless buttons I need to carefully avoid accidentally hitting is worse now: they removed the two buttons I use all the time, and replaced them with a couple that don't work at all
c. Now every time I go to a "room" where a student is working (or come out) it costs me two drags to restore the window to a usable size, and two extra clicks to pop up a menu to access the chat and breakout listing selectors (which are now reduced to menu items, so it takes longer to get there then find and click on them), a net cost of some +10 seconds every time I want to do that (and time is money)
d. The un-resizable floating breakout room window that sits in front of everything (including other application windows) is now bigger, so it steals more screen space in a unix system that already wastes too much screen space on worthless bars of this or that or nothing.

How is my life better with this upgrade? It's not. So why are they forcing this on me? They are a monopoly, they can do anything they want, and what can we do about it? There are no competitors to go to.
I experienced the same problem myself not quite 20 years ago. Sitting in front of a high-voltage cathode-ray tube (CRT) monitor for 12 hours a day, six days a week since the early 1980s developed in me early-onset cataracts: I flunked the drivers license eye test (I got an optometrist to write a note saying my eyes were good enough to drive), so I went for lens-replacement surgery. The ophthalmologist was startled at the shape of my eyeballs -- after thinking about it a while, I knew exactly why: from my youth, I spent 90% of my waking hours staring at something 18 inches from my face, so my God-given eyes reshaped themselves to focus naturally at that distance. I couldn't read street signs or the blackboard in class, so my parents dutifully got me fitted for glasses every year or two -- and for a couple weeks the sidewalk was about a foot below my feet, and there were wires up there on top of the telephone poles -- and I had headaches for six months while my eyes again reshaped themselves to focus naturally at 18" again. The ophthalmologist refused my request to set the replacement lens focus to 18". It wasn't that big a deal: I found an optometrist and an oculist willing to fit me with bifocal glasses with only the top 8mm set for distance vision (the rest a positive two diopters, to focus at 18"). I never went back for the other eye. I still have better night vision from the unchanged eye than the new lens, probably because the replacement lens is a tiny little thing, and when the iris opens up for low-light conditions, more light goes around the lens than through it.
Anyway, Young's piece gave the history of prosthetic replacement limbs since the Civil War, and her own experience with a costly high-tech bionic hand that often didn't work as well as if she "left it on the couch."
There is no hint that the last two features in this issue are about the same issue, but they are. "The Radical Scope of Tesla's Data Hoard" reports the vast amount of data collected by every Tesla car and sent back to the manufacturer without any informed consent by the buyers and unsuspecting passengers. This came out in a Florida court case, some eager lawyer talked his client into suing for a defective battery allegedly resulting in a fiery fatal crash, and Tesla brought to court their records for this car, of dozens of top daily speeds in excess of 100mph in the previous year -- including a whole block of two weeks with nothing less than 85, this is in Florida where the max legal speed is 70. Tesla obviously intends to use this kind of data to train their so-called AI for infrequent events, which infrequent events are the main reason autonomous vehicles still have a safety record two orders of magnitude below ordinary human drivers. At least this article expresses concern about that data collection, which includes trip locations that could be mined back to individual cars and their owners. It clearly also saved Tesla millions of dollars in bogus damages.
The final article is an infomercial offering "Deep learning ... delivering the century-old promise of truly realistic sound reproduction." It's nothing of the sort. I was in high school when stereophonic records became popular, and eager technicians with the same idea as now expressed by this article in favor of "3D Soundstage" sought to repurpose the existing mono recordings as stereo. They were only the same mono sound recorded in stereo format, but with somebody cranking the balance control back and forth, so it sounded like the sound was moving around left and right. On an oscilloscope the signal was still a straight diagonal line of varying angle. In true stereo you can hear the violins on the left at the same time as the brass and tympani on the right. If you are wearing headphones, the whole orchestra shifts as you turn your head. These guys are peddling the same snake oil with a lot more sophistication. Maybe the iPhone ear buds can report back to the software which way the head is turning (the authors seem to say so, but without claiming to make use of that information), but like the fake-stereo mono recordings, you cannot get data back out of the system that isn't there. If you read their infomercial carefully, they aren't making use of it.
They show a graphic and photo for "Measuring Head-Related Transfer Function," based on the supposition that the shape of the head and body so affect the arrival of sound from different locations that people allegedly can locate sounds not only left and right, but also up, down -- and presumably backwards, although their diagram and photo did not show them measuring that. I did read some time ago that people can tell front from back by the quality of sound, so I tried watching for it, and found it impossible. The difference in volume (and possibly phase angle, but I could not measure that in my own experiments) gives the hearer a cone of possible locations for the source of the sound. Move your head and you get a new cone, the intersection of which with the previous cone gives you a line that is either front or back, up or down (but not both at once, which requires that you move your head a second time in a different axis). This I know and use to locate sounds in the house like a stray cricket or other possibly animal sounds. Maybe somebody can locate the voice of a person they know, who is behind them, by its being muffled, or maybe they only look back because they don't see them in front, but it only works with voices you already know. If the voice is naturally muffled (like they mumble), then that tells you nothing until you hear them often enough to detect additional muffling.
The point is, the different frequency components of the sound travelling from different locations on the cone determined by amplitude alone are likely to work only when the listener already knows what they are hearing, which is a very personal thing, as the editorial and Ms. Young's article said so eloquently in their own domains. Worse, the muffling effects will be different for different head shapes and different hair styles, and their photo showed only a bald generic male half-dummy with middle-aged features. Ear cartilage never stops growing, so even a person's age matters. "Nothing About Us Without Us."
It gets worse. They use a neural net (NN) -- witness the "Deep" in the subtitle, but also they said so -- that is trained from the separate tracks of the kinds of sounds they could get separate concert tracks for (they said so) so they can identify the average instrumental sound types that the average American rock concert listener would hear, and pick those sounds out into separate virtual tracks, which they let the listener place on a virtual stage. So if you don't move your head, or if you are in a room with a half-dozen or more speakers scattered about the room and their software knows where (they didn't say so), they can make the virtual tracks seem to come from different places than you might experience with the unmodified stereo, listened to again without moving your head. This may be more personal (you get to place their virtual tracks, but not the actual instruments within those tracks) but it is certainly not more realistic than being there. Besides, at all live concerts, the sound goes through a sound system and out to ordinary stereo speakers amplified so loud you cannot hear the original instruments, nor even half the sound from the speakers. These guys are not recreating that experience, they are destroying it.
The NN is trained on instruments they can get actual separate tracks for. If a concert has three guitarists strumming away on their own guitars, and the producers made (and released to the Soundstage team) separate tracks for the three guitars, then their NN is trained on three generic guitars (possibly distinguished on each musician's private sound board, but some guitarists change those settings on the fly) -- which it probably cannot tell apart after they are merged into the stereo recording -- and their app is not going to be able to separate those three guitars onto the virtual stage they offer their listener, no matter how "smart" their AI is; the information simply isn't there to be isolated. Worse: it won't work at all for instruments the system was not trained on (think: harpsichord or glass harmonica or piccolo trumpet). They claim it works for spatially separating multiple speakers in a teleconference, but it was not trained on those speakers, so they would need a different algorithm, one that somehow captures the voiceprint of each speaker on the fly, then isolates it algorithmically. Maybe they did that, but they didn't say so.
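The "information simply isn't there" point can be demonstrated with a toy example of my own (hypothetical signals, not their data): if two sources share the same spectrum, entirely different splits of loudness between them produce the same mixture to within rounding, so nothing downstream -- neural or otherwise -- can tell which split actually happened:

```python
import math

# Two "guitars" with the same timbre: same harmonics, different amplitudes.
def guitar(amplitude, n=1000, f=220.0, rate=44100.0):
    return [amplitude * (math.sin(2 * math.pi * f * t / rate)
                         + 0.5 * math.sin(2 * math.pi * 2 * f * t / rate))
            for t in range(n)]

a, b = guitar(1.0), guitar(0.6)
mix = [x + y for x, y in zip(a, b)]

# A completely different split of loudness between the two players:
c, d = guitar(0.8), guitar(0.8)
mix2 = [x + y for x, y in zip(c, d)]

# Sample by sample the two mixtures are (numerically) identical, so no
# separator can recover which pair of tracks produced the recording.
print(all(abs(x - y) < 1e-9 for x, y in zip(mix, mix2)))  # True
```

Real guitars differ more than these idealized sine stacks, but the closer the timbres, the closer the mixture comes to this degenerate case.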
I still think it's snake oil.
It's not a new idea: King Lemuel expressed the same idea more than two thousand years ago, mostly by telling us that intoxicants are given by God to enable the hopeless to escape their misery, but "it is not for kings [and] rulers" who are responsible for the interests of other people, lest they lose sight of their responsibility "and deprive all the oppressed of their rights."
It's worse than that: Srinivasan's wishful thinking is futile, and if he doesn't know it, he's a worse fool than his many followers. There is no exit, no way out, no -- as Margaret Thatcher was quoted in the same rag a couple of months earlier -- "There Is No Alternative" ("TINA"). What Srinivasan is doing is giving an intoxicating drug to the hopeless masses. He's not himself trying to escape; he's an entertainer, the barkeep, the drug dealer; that's how he makes his living.
How do I know? He has a Stanford education; he's smart enough to know there's no escape. You can get people stoned out of their minds, and they will pay you for the privilege. It's an income in an economy that has automated all the mediocre workers out of a job, and entertainment is all that's left (if you are good at that). And programming, but he doesn't have that kind of horsepower.
He can sell a fiction (I tried, but did not succeed; the Good Lord has better things for me to do). The fiction is escape, what everybody gets drunk or stoned to achieve. But there is no escape. You still have to eat. You may imagine that you are inventing a new virtual country -- and what a beautiful drug the imagination is! -- but the other 98% of your body still lives in meatspace, it still needs to get real calories and real proteins, and somebody not very far from your real body needs to carry out the trash, the real one-time wrappers those calories and proteins came in. There's no way to create real wealth in virtual space; the most you can do is move it around in a zero-sum game, from people who dig real wealth out of the real ground to people who don't. Including to people like Srinivasan who are selling you the wish.
The next article is about real people really escaping real death in Afghanistan, and the people helping them. No virtual reality for them; they are really there getting shot at with real bullets by people so ashamed of what they are doing to other people that they must hide their faces.
Then another pretense: Live-Action Role Playing, a whole article about people so bored with pretending to be someone other than who God made them to be that now they are pretending to be trying to be otherwise, to escape. I read some of it, then gave up. I'm not into pretense. I'm not into fighting God, it's like fighting the nurses at the hospital: You. Will. Lose.
The next feature was about some novelist trying to invent her own world. Maybe she's just an entertainer, like Srinivasan, I don't know. She's certainly not the first, because pretty much all "chick-lit" is selling fantasy, a fake world to make up for the dreary life people who cannot create wealth live in. I skip over those novels in the library. I skipped over the article about her. Articles by female writers are harder to believe than the alternative. Articles by female writers about female writers are even harder to believe. Articles by female writers about female writers trying to pull off a doomed Srinivasan-style escape are surely the worst of all.