Cutting the Hype

These days it is all but impossible to open the Net without stumbling over zillions of references to artificial intelligence (AI). What it is; the things it can and will do; the good things it will bring about; the fortunes it will make for those willing and able to make the fullest use of it; and, above all, the disasters which, thanks to God the Computer and His human acolytes, are just around the corner and, unless countered in time, may yet bring about the destruction of mankind.

In what follows, I want to cut the hype a little by providing a very brief list of some of those things. And, on the way, explain why, in my view, either their impact has been vastly exaggerated or they will not happen at all.

Claim: AI can and will make countless workers superfluous. The outcome will be massive unemployment with all its concomitant problems, such as impoverishment, a growing cleavage between rich and poor, class struggles, political upheavals, uprisings, revolutions, civil warfare, and what not.

Rebuttal: Much the same was said and written about the first computers around 1970, the first industrial robots during the 1950s, and so on backward in time all the way to the first steam engines in the early decades of the 19th century. Fear of technologically-generated unemployment may indeed be traced back to the Roman Emperor Vespasian (reigned 69-79 CE), who had the inventor of a labor-saving device executed for precisely that reason. World-wide, during the almost 2,000 years since then, employment has often gone up and down. However, taking 1900 as our starting line, not one of the greatest upheavals—not National Socialism, not the Chinese Revolution, not decolonization, not feminism, to list just a few—has been due mainly, let alone exclusively, to technological change. As my teacher, Jacob Talmon, used to say: I know all that stuff about history, anonymous political, economic, social, cultural, and, yes, technological forces. But, absent Lenin, do you really think the Russian Revolution would have taken place?

Claim: AI and the ability to manipulate and spread information of every kind (spoken, written, in the form of images) will make it much harder, perhaps impossible, to distinguish truth from falsehood, honesty from fraud.

Rebuttal: True. But so did the invention, first of speech, then of writing (see on this Yuval Harari's Sapiens, which helped inspire this post), then of print, then of newspapers, then of photography, then of the telegraph, then of film, then of electronic media such as radio and TV. Every one of them was open to abuse by means of adding material, subtracting material, and plain faking. And every one of them often has been, and still is being, so abused day by day. Long before the invention of "intellectual property," thieves and counterfeiters were forging ahead. Photoshopped images and deepfakes are themselves computer-generated. But what one computer can generate another can counter; at least in principle.

Claim: In the military field, AI will help make war much more deadly and much more destructive.

Rebuttal: The same was said and written about previous inventions such as the machine gun, the aircraft, and the submarine. Not to mention dynamite, which its inventor, Alfred Nobel (yes, he of the Prize), hoped would be so deadly as to cause war to be abolished. In fact, though, it is not technology alone but politics, economics and various social factors—above all, the willingness of individuals and groups to fight and, if necessary, die—that will govern the deadliness and destructiveness of future war, just as they have done in the past. Caesar's conquest of Gaul is said to have caused the death of a million people. Tamerlane in the fourteenth century wiped out perhaps 17 million. And even that is easily overshadowed by the number Genghis Khan, using nothing more sophisticated than captured mechanical siege engines, killed a century and a half earlier. Here I want to repeat a statement I have often made before: namely that the one invention that has really changed war, and will continue to make its impact felt in all future wars to come, is nukes.

Claim: AI will put an end to art and artists.

Rebuttal: A little more than a century ago, the same was said and written about film bringing about the end of the theater. Starting almost two centuries ago, the same was said and written about photography sounding the death-knell of painting. Need I add that photography and film, far from causing art to disappear, have themselves turned into very important art forms?

Claim: “AI-powered image and video analysis tools are used for a wide range of social impact applications. They can detect anomalies in medical scans, assess crop health for farmers, and even identify endangered species from camera trap images, aiding conservation efforts.”

Rebuttal: As if all these things, and any number of others like them, were not done long before anyone had heard of AI.

Claim: AI has changed/will change “everything.”

Rebuttal: Back in the 1990s, exactly the same things were said of .com. Yet looking back, it would seem that the things that did not change (the impact of poverty, disease, natural disasters, war, old age and death, for example, as well as that of love, friendship, solidarity, patriotism, etc.) are just as important as those that did.

If not more so.

A Madcap World

Stanley, The Promethean, Kindle, 2017.

A madcap world filled with madcap characters. A godforsaken English village called Tussock's Bottom where the favorite drink is a kind of beer affectionately known as Old Stinker. A Christminster (i.e., Oxford) Don named Habbakuk McWrath who is an expert on Extreme Celtic Studies and likes nothing better than a good old-fashioned brawl of the kind his wild ancestors used to engage in. A British prime minister named Terry Carter, leader of the Conservative Democrats (ConDems, for short). Modelled on a real former prime minister whose name I shall not spell out, he is "a consummate liar and cheap publicity seeker, cravenly addicted to the latest media opinion polls and the number of his 'likes' on Facebook and Twitter, perpetually grinning, and with no sincere beliefs about anything except his own importance." A highly polished French intellectual named Marcel Choux (cabbage) who has declared war on truth—yes, truth—as an instrument of racism, repression, discrimination and a whole series of similar bad things. And who, instead of being sent to the loony bin, is worshipped by the students and faculty of the London School of Politics (aka of Economics and Political Science), who have invited him to receive a prize and give a lecture.

Into this world steps Harry Hockenheimer, a young American billionaire who made his fortune by helping women satisfy their vanity when looking into a mirror. Now 39 years old, happily married to Lulu-Belle who does not make too many demands on him, he has reached the point where he is simply bored with life. Casting around for something significant to do, he hits on the idea of building a robot sufficiently human-like in terms of appearance, behavior and mental abilities to act as a factotum to anyone with the money to buy or rent it. To keep things secret, the decision is made to design and produce the prototype robot not in Hockenheimer's native California but in Tussock's Bottom. There he has his assistant set up a high-tech household where everything, from shopping through cleaning to regulating the temperature, is done by computers. At one point Hockenheimer returns to his home away from home only to find that mice, by gnawing on the cables and defecating on them, have turned it into a complete, dirty and smelly mess. That, however, is a minor glitch soon corrected by a very willing elderly lady armed with a whirlwind of dusters, mops, polishers, sprays, buckets, and similar kinds of mundane, but highly effective, equipment.

Approaching completion, the robot is christened Frank Meadows, as inoffensive a name as they come. He also goes through a number of tests that highlight his phenomenal memory as well as his ability to articulate and do anything a human can, only much better and much faster. Meadows starts his career by visiting the local pub, where he plays darts with his fellow visitors and wins the game hands down. Next he deals with an obnoxious policeman who, having attacked him, ends up in a muddy ditch and is subsequently fired from the force. He takes part in a TV show called A Laugh a Minute whose host, Jason Blunt, "a flabby, stupid, greedy, and arrogant exhibitionist with a chip on both shoulders," ends up by physically attacking Frank and, for his pains, is dumped back into his seat like a sack of potatoes. He spends many hours listening to Hockenheimer who does his best to explain to him the way the world works. He… but I will not spoil the story by telling you how it ends. Nor will I let you have the author's real name and identity; that is something you will have to find out for yourself.

Amidst all this, political correctness, inclusionism, identitarianism, and any number of similar modern ideas are mercilessly exposed not just for the nonsense they are but for the way their exponents bully anyone who does not join them. All to the accompaniment of jokes, puns, wordplay and double entendres such as only Brits seem able to come up with. To be sure, Stanley is no Jane Austen and does not even pretend to trace the development of character the way great novelists do. However, almost any page of this book you may pick up will either make you helpless with laughter or, at the very least, bring an ironic smile to your face. Get it and spend a couple of hours—it is not very long—reading it from cover to cover. I promise you, you will not be disappointed.

It Will Survive AI Too


Over the last few months, the media have been positively bristling with so much hype about the coming AI Revolution as to make the heads of hundreds of millions spin. How it will completely upset the way things are produced and services rendered. How it will increase productivity by anything between 40 and 1,200 percent (depending on whom you believe). How it is "more powerful than Ukraine and Taiwan." How it will upset the existing international order, assisting the US and India at the expense of China and Europe (Russia, apparently, does not count) and possibly saving the first-listed from what many people see as its imminent decline.


I am seventy-seven years old. I have never commanded a military formation, nor run a corporation, nor done research either in the natural sciences or in computing, AI included. In other words, my understanding of the issue is, let's be charitable, limited. On the other hand, I feel that what little understanding I have of the way history works—an understanding I've been trying to acquire since I was ten years old—gives me some kind of handle when thinking about it. By way of a peg on which to hang my thoughts, I have chosen an article on the subject: S. Sharma, "8 Ways Artificial Intelligence (AI) Can Help You Improve Productivity," The AI Journal, 9/1/2023, at https://aijourn.com/8-ways-artificial-intelligence-ai-can-help-you-improve-productivity/

So here comes Mr. Sharma’s list of some of the things AI is going to do.

1. “Forecast Demand Accurately.” As any student of economics can tell you, demand—here referring exclusively to commercial demand, not to every other kind—depends on many different factors. Technological developments, as when new gadgets, e.g. steam engines or automobiles or computers, appear on the market. Prices, especially relative ones, that go either up or down. Macroeconomic developments. Changing circumstances, e.g. droughts, global warming, or the emergence of epidemics such as COVID. Changing tastes, habits and ideas which cause the public to prefer one product over another. The discovery of new resources or the drying-up of old ones. The outbreak of war. Trends, luck, and fate (whatever that is). Some of these factors are foreseeable to some extent, others not. Some can be quantified and made computable, others not. All interact, forming a tapestry infinitely more complicated than anything that ever came off a loom. As a result, for every correct vision there are ten incorrect ones. Briefly, the future is as much of a mystery today as it was 2,000 years ago when the Roman orator and lawyer Marcus Tullius Cicero discussed the problem with his brother Quintus. Ironically, Microsoft Bing, asked what the future would be like, first told me there were too many different scenarios to count and then invited me to submit my own.

2. “Automatic Text Creation.” This is already happening. Indeed, multinational companies, in need of multilingual catalogues to sell their wares in different countries, have been using something like it for years. However, as anyone with experience in the matter can tell you, the outcome is likely to be both error-prone and moronic. Error-prone, because the machine has no idea of what the words it manipulates actually mean and is therefore liable to come up with all kinds of absurdities. Moronic because, having no ideas of its own, it strings words together on the basis of the order in which they have been arranged before. One is reminded of the story in Gulliver’s Travels where “the most ignorant person, at a reasonable charge, and with a little bodily labor, might write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study” by using a special table on which all the nouns, all the adjectives and all the verbs in all languages, written on rotating cubes, can be arranged into a text simply by turning a crank. And without in the least understanding what she (or he) has done, of course.
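The point about stringing words together "on the basis of the order in which they have been arranged before" can be made concrete with a deliberately crude sketch (purely illustrative, and not how any commercial system actually works): a bigram chain that records which word followed which in a sample text, then cranks out new text from those recorded successions alone.

```python
import random

# Illustrative toy only: a bigram "Gulliver engine" that strings words
# together purely on the basis of the order in which they appeared
# before, with no grasp of what any of them mean.
def build_chain(text):
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length, seed=0):
    rng = random.Random(seed)  # fixed seed, so the output is reproducible
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the machine has no idea what the words mean "
          "and the machine strings the words together")
chain = build_chain(corpus)
print(generate(chain, "the", 8))  # fluent-looking, meaning-free
```

Every word and every word-to-word transition in the output occurred somewhere in the sample, so the result looks locally plausible; globally it says nothing, which is exactly the "moronic" quality described above.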

3. “Predict Maintenance.” With or without the aid of AI, predictions of maintenance requirements have been made for ages. At best, AI will make the process faster and more reliable. But I cannot help wondering whether AI will be able to come up with new and improved maintenance philosophies. If I am wrong, please let me know.

4. “Easy Data Extraction and Review.” This, too, has been done for ages. In fact every single ancient writing system known to us was designed specifically for that exact purpose. So in China, so in Mesopotamia, so in Egypt, so in the Aegean (Linear A), and so among the Inca (quipu). But whether General Motors, for example, is better run today than it was in its heyday in the 1920s, long before computers and the current AI revolution, is doubtful. Why? Because demand adapts itself to the available supply. The more AI we have at our disposal, the more complex and the more numerous the problems it is asked to resolve.

5. “Seamless interaction.” Attempts to achieve it, some successful, some not, have been going on for ages. The problem? Either changing external circumstances or the kind of human caprice known, euphemistically, as “the free will.” Or, not seldom, some more or less weird combination of both.

6. “Improve manufacturing processes.” This may be true, but it is certainly not new. As long as humans have produced anything—meaning for the last few hundred thousand years—they have also been trying to improve the production process. It worked: now slowly, now very fast.

7. “Automatic hiring.” AI may help employers go through the hiring process faster. However, often it does so only by making life for would-be employees much harder, as by having them fill out far more questionnaires. All without any guarantee that things will work out better than they have done in the past or are doing now. For example, can anyone seriously argue that Julius Caesar was not as good at choosing his subordinates as George Marshall was?

8. “Social Commerce & Livestream Shopping.” This has been taking place at least since the first known cities, emerging some 5,000 years ago, set apart special spaces for—yes, you guessed it—“social commerce.” Indeed the “com” in “commerce” stems from the Latin cum, meaning “together.” AI may have facilitated and extended it, but without changing anything really important.

Conclusion. In the words of Ecclesiastes, “the thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun.” Mankind has survived the ice age when, if scientists have got it right, at one point there were only 3,000-10,000 individuals left. It will survive AI too.


Where Does All That Put AI?

At some time between 2,000,000 and 500,000 BCE, men (and, lest we forget, women-lesbians-gays-transgenders-queer people-bisexuals-asexuals) exchanged animals’ cries/roars/barks/howls etc. for true speech with all its infinite nuances and complexities. Doing so, they became capable of much better intraspecies cooperation and changed the course of history. Forever.

At some time between 500,000 and 200,000 BCE they learnt how to control and use fire. Doing so they greatly expanded the range of possible habitats and edible foods and changed the course of history. Forever.

At some time between 500,000 and 100,000 BCE they invented clothing, enabling them to spread into a great many environments that had previously been uninhabitable and stay in them throughout the year, regardless of season or weather. Doing so they changed the course of history. Forever.

At some time around 10,000 BCE they invented agriculture, enabling much larger numbers of people to live together and be fed. Doing so, they changed the course of history. Forever.

It is said that, at some time around 10,000 BCE, they invented war, meaning the use of coordinated violence by the members of one group of people against those of another. Doing so, they changed the course of history. Forever.

At some time around 4,000 BCE they invented the wheel, thereby enabling not merely people but much larger and heavier loads to be moved much farther, faster, and at lower cost. Doing so, they once again changed the course of history. Forever.

Around 3,500 BCE they invented writing, thus enabling much larger numbers of people to form polities, cooperate, and undertake tasks far greater than anything their predecessors could. This invention, too, changed history. Forever.

Around 2,500 BCE they learnt to work iron, thus laying the foundation of much subsequent technology and changing the course of history. Forever.  

And so on, and so on. Leading through the invention or discovery of bow and arrow (ca. 70,000 BCE), weaving (in eastern Anatolia, ca. 7,000 BCE), astronomy (in Egypt and Mesopotamia, ca. 4,000 BCE), high-sea navigation (4,000-2,000 BCE), gunpowder (in China, ca. 1,000 CE), print (1450), modern observation-experiment-mathematics-based science (1650), the steam engine (1729), the railway (1825), the telegraph (1830), the dynamo (1831), the electric motor and internal combustion engines (1860 and 1873 respectively), the telephone (1875), radio (1895), quantum mechanics and relativity (1900 and 1905 respectively), heavier-than-air flying machines (1903), penicillin (1928), TV (1936), electronic computers (1948), and the structure of DNA (1953), to mention but a few out of tens if not hundreds of thousands. Starting at least as far back as when the Emperor Vespasian had an inventor executed lest his invention, a new kind of crane, should rob many citizens of their livelihood, many of them were initially seen as absolutely catastrophic. The introduction of gunpowder, print, and mechanical weaving all brought about similar reactions (including some that were violent) by various groups of people. Ditto the advent of nuclear weapons (1945), which many authors, both military and others, keep telling us will inevitably lead to Armageddon and must therefore be combatted by every means.

Fast forward to the present. Writing for the Economist, Yuval Harari has put himself at the head of entire herds of pundits. His argument? Artificial intelligence, by learning to use language in ways that are sometimes almost indistinguishable from those hitherto reserved for humans, is on the threshold of changing history once again. And, as it does so, may take the rudder out of our hands and lead us into a new catastrophe much worse than all previous ones.

Far be it from me to dispute the significance of these and any number of other ground-breaking inventions and discoveries. Had it not been for them, presumably we would still be living on the African savanna in nomadic or semi-nomadic groups of between 50 and 150 individuals. Gathering fruits, tubers and berries; hunting birds and small animals; trying to avoid being eaten by crocodiles as well as any number of big cats; and watching every second child die before it could reach puberty. Or else, going back still further in time, crying out to each other while swinging from tree-branch to tree-branch as some of our ape-like ancestors are believed to have done.

But consider.

First, suppose it is true that each of these and other inventions and discoveries has pushed history onto a radically different “new course.” In that case, how come that, after thousands upon thousands of years of innovation, so many of our earliest traits, both psychological and social, both individual and collective, are still with us? Including our need for company; our craving, partly successful but partly not, to try and understand how things work; our ability to recognize the comic and laugh; our enjoyment of play; our capacity for extreme cruelty; our ability to create artefacts of every kind; our attraction to beauty and to music; our frequent anxiety about what the future may bring; and as many others as you may care to list.

Second, suppose it is true that history’s course has undergone any number of truly fundamental changes. In that case, how come some ancient items—e.g. Egyptian wall-paintings, the game of Go, the Bible, Greek art, the Platonic Dialogues, Confucius’ Analects, Laotzu’s Book of Tao, Euclid’s Elements, Shakespeare’s plays, the works of Rembrandt and Vermeer, to mention but a few—are not only with us still but appeal to us just as much as they did to our ancestors?

In other words: isn’t history a fabric made up of both the warp, the threads that run lengthwise, and the woof, the threads that run across? And isn’t it true that, without both of them, it could not exist?

There have indeed been many changes: but have they really been as fundamental, let alone as disastrous, as the drumbeat of so many pundits suggests? If so, how did we increase from perhaps as few as 600 breeding individuals during the last ice age to 8 billion people today?

Where does all that put AI?

ChatGPT

Years ago, when I was still at the Hebrew University in Jerusalem, I used to teach a course named Early Modern Political Thought. Judging by the feedback mechanism’s results, it was the most successful course I ever gave—it regularly got 19 out of 20 possible points. So I repeated it many times. Focusing on original sources, we read parts of the works of Desiderius Erasmus, Niccolò Machiavelli, Emperor Charles V (the 1543 testament), Jean Bodin, Thomas Hobbes, Robert Filmer, John Locke, and, when there were still some meetings left, Charles de Montesquieu.

Like most freshmen in most countries, the students, most of them 21-22 years old and fresh out of the military, did not know anything about anything. To help them along while at the same time avoiding a waste of time, I made them do two things. First, prior to each meeting they were to read the assigned chapters. Second, each time we finished with one author and moved to the next one they were to present me with a short essay—about 1,000 words—explaining who he was, what he had written, and so on. I did not ask them for footnotes, but I did ask that they list the books or articles they had consulted. It worked; week by week, I would collect their essays, read them quite carefully, and annotate them; week by week, I saw their work improving. They in turn were very grateful, even to the point of telling me that my course was the best of the lot.

Since then much has changed. Probably the most important change has been the rise of political correctness, a monster if ever there was one, which has made teaching literature, history and social science almost impossible. That, however, is not the development I want to talk about here. What I do want to talk about is the following text. It was composed not by a human being but by an AI mechanism, ChatGPT. Having duly signed in, I asked it to provide me with a short essay about Machiavelli’s life and work. Soon it came up with the following:

Niccolò Machiavelli is considered one of the most influential political philosophers in history, with his ideas and theories influencing political leaders for centuries. Born in Florence, Italy in 1469, Machiavelli was a diplomat, political philosopher, and writer during the Renaissance period.

One of Machiavelli’s most well-known works is “The Prince”, a political treatise that explores the concept of power and how leaders can maintain it. In this work, Machiavelli argues that rulers should be willing to do whatever is necessary to maintain their power, even if it means acting in ways that are considered immoral or unethical. This idea became known as “the ends justify the means”, and it has been widely debated and discussed by political philosophers and leaders throughout history.

One of the key themes in “The Prince” is the idea of “raison d’etat”, or the belief that the needs and interests of the state should always come before the needs and interests of the individual. According to Machiavelli, a leader should prioritize the well-being and stability of the state above all else, even if it means disregarding the rights and freedoms of the citizens. This idea is reflected in Machiavelli’s famous quote, “The first rule of a prince is to be able to be not good, if that is required to maintain his rule.”

Another key concept in “The Prince” is the idea of “fortuna”, or the role that luck and chance play in the success of a leader. According to Machiavelli, a leader should be aware of the role that luck and chance play in their success, and should be prepared to adapt to changing circumstances. However, he also believed that a leader could influence their own luck by being proactive and making decisive actions.

In addition to “The Prince”, Machiavelli also wrote a number of other influential works, including “The Art of War” and “The Discourses”. In these works, he explored themes such as the importance of military strategy and the role of citizenship in a republic.

One of the main criticisms of Machiavelli’s ideas is that they promote immoral and unethical behavior. Some have argued that his emphasis on the needs of the state and the willingness to use any means necessary to achieve power can justify authoritarianism and the suppression of individual rights. Others have argued that his ideas are necessary in order to maintain stability and protect the state.

Despite these criticisms, Machiavelli’s ideas continue to be widely studied and debated by political philosophers and leaders today. His work has had a significant influence on the development of modern political thought and continues to be a major influence on political leaders around the world.

In conclusion, Niccolò Machiavelli was a political philosopher whose ideas continue to be widely studied and debated today. His most famous work, “The Prince”, explores the concept of power and how leaders can maintain it, and his ideas have had a significant influence on the development of modern political thought. While some have criticized his ideas for promoting unethical and immoral behavior, others argue that his ideas are necessary in order to maintain stability and protect the state. Regardless of one’s perspective, it is clear that Machiavelli’s work has had a lasting impact on political thought and leadership.

Not half bad, don’t you agree? A bit pompous, yes. A bit bland. As if someone had decided to leave out anything a little bit interesting, let alone provocative or innovative; which, judging by what has been published about the way the program was created, is just what “someone” had done. But presumably good enough to pass the Turing test. Logically put together, without many of the things I used to warn my students against for years on end, such as leaps of thought, repetitions, or obvious semantic and grammatical errors. Better stuff than many students can produce on their own; and certainly not produced without some thought having been devoted to what to put in (and in what order), what to leave out, why, and how; and so on, and so on.

Which, after all, is precisely what the exercise I made them do was all about and why they enjoyed doing it as much as they did.

AI

S. J. Tangredi and G. Galdorisi, eds., AI at War: How Big Data, Artificial Intelligence, and Machine Learning Are Changing Naval Warfare, Annapolis, MD: Naval Institute Press, 2021.

The book, which I got in hard copy from a friend, was written by a team of experts, all of whom have years of experience with computers, cyberwar, AI, the US Navy, or all four of those. Such being the case, I was hardly surprised to find it overflowing with praise (interspersed with a few warnings, what’s true is true) for everything that has to do with computers. What huge memories they have, incomparably larger and more easily accessible than those of the most capable humans. How fast they can process information and, thanks to the data links that connect them, pass it to the ends of the universe (and perhaps beyond, but let’s not enter into that here). How sophisticated their programs, specifically including AI, have become, enabling them to “see” a thousand connections that would probably have escaped humans even if they spent a thousand years looking for them. How modern warfare (and a thousand other things) would be inconceivable without them.

How dangerous it would be to allow America’s rivals to leapfrog it in this critically important field. Above all, what marvelous things computers and AI may still be expected to do in the future. How, though unable to replace humans, they can greatly enhance their capabilities. Provided some remaining fundamental problems (such as the difficulty they have in adapting to change and the vast surplus of information they generate) are solved, of course; and provided the necessary funding is made available. All this, against a background of naval, and by no means only naval, warfare that is becoming steadily faster and more complex.

I would be the last person in the world to even try and dispute all this. After all, who can argue with sentences such as the following? “For this modest shift in force design to yield the most benefit, DoD needs to co-develop C2 processes that can operate a more disaggregated force and to pursue a new innovation strategy that focuses less on gaps in the ability of today’s force to operate as desired and more on how the future could perform better with new capabilities that may create novel ways of operating” (Harrison Schramm and Bryan Clark, p. 240). “An important benefit of using machine control is that it enables C2 architectures to adapt to communications availability, rather than DoD having to invest in robust communication infrastructure to support a ‘one size fits all’ C2 hierarchy” (same authors, p. 241). And who cares that “the term ‘all domain’ has started to replace the US Army ‘multi-domain warfare’ term. First use appears to be Jim Garamone, ‘US Military Must Develop All-Domain Defenses, Mattis, Dunford Say,’ US Department of Defense, April 13, 2018, https://www.defense.gov/Newsroom/News/Article/Article/1493209/us-military-must-develop-all-domain-defenses-mattis-dunford-say” (Adam M. Aycock and William G. Glenney, IV, p. 283).
Not I. Nor, I suspect, anyone who is not a member, bona fide or otherwise, of the community which specializes in such things. All this might have convinced me to snap to attention and salute in the face of the avalanche of expertise—the “select bibliography” alone amounts to forty pages—the authors have hurled at me. If I did not do so, though, that was partly because of the following incident. I got my first inkling that the book, which had been sent to me by snail mail, had arrived in my neighborhood when I found a computer-printed note in my mailbox saying that I should come and collect it from the nearby post office. However, I knew I could not do so immediately; here in Israel it is customary for the Postal Service to give you your letters and parcels not on the day you are notified but on the next one. And this was a Thursday; since the Israeli weekend starts on Friday and lasts through Saturday, collecting the book had to wait until Sunday. Sunday morning I went to the post office, only to learn that, to send a letter or parcel, you now have to make an appointment in advance (by mobile phone and app, of course). As a result a number of people, mostly elderly ones like myself, were milling about looking embarrassed, not knowing what to do and how to do it. A few, asking the overburdened staff for help but not getting it, were close to tears.

Fortunately I was there to receive an item, not to send it. This time there was no need for a mobile phone. I handed in my note, typed my ID number into a little gadget they keep for the purpose, and prepared to sign my name on the screen when I realized that the attached electronic pencil was missing; perhaps someone, overtaken by computer rage, had deliberately torn it away. So instead I used my finger—not to make a print, which the machine was unable to “understand,” but simply to leave some kind of mark—an X, as it happened. Much like the ones illiterates of all ages have always used and still use.

I suppose I was lucky. They let me have the book, which as is almost always the case with the Naval Institute turned out to be not only crammed with information but well and solidly produced. Not having to go home and visit the post office again—good!

In and out of the Start-Up Nation, my experience may be unique. Or is it?