Writing

The general form of academic writing

Academic papers of many forms still follow the sequence shown on the left. Different publication types expand, combine, omit or contract sections, and name them differently, but the general logical structure remains. In the figure, academic writing idealizes the research journey as it moves between abstract theory and practical evidence. As will be seen, scientific research is powered by abstract theory but grounded in the practical evidence it finds.

It is sad but true that no-one cares about your research trials and tribulations. So when writing up your results, omit dead-ends and false starts. A paper is not a record of the actual research journey but describes an ideal journey where everything was logical. This is not because research is logical, as it isn't, but to make it easier for others to read. So the figure describes academic writing as an ideal journey, not the actual one.

The names above may be used but not always – check the practice of your field. And not all fields have all parts, e.g. mathematics has no Method section. A research proposal, for example, stops at the method and has no results or discussion. The sections vary but the logical order stays the same. Writing up research as follows makes it easier for others to read, as they know where to look for what:

  • Method. How evidence was gathered.
  • Results. What the evidence showed, including findings.
  • Discussion. What the findings mean in broad terms.
  • References. A list of the sources used in the paper.

The above are not headings, so the discussion part does not need to be called “Discussion” nor does the introduction need to be called “Introduction”. Choose headings that suit your paper and style.


Advisor

Advisor. Gathering data to answer questions sounds simple in theory but almost no-one does it in practice without help. Every research student has an advisor to guide and monitor progress. Is the advisor a guru whose word is gospel, a captain who must be obeyed, a sponsor to be kept happy or a guide who can be ignored? The answer depends on your role in the research.

Research roles. Research as a complex journey is often undertaken in a group, where who does what is broadly negotiated as follows:

  • Pilot. Decides the research direction and has the final say in all decisions (PhD).
  • Co-pilot. Advises the pilot and takes charge for part of the journey (Masters).
  • Assistant. Helps as directed and required (Postgraduate project).

There are many exceptions, but in general PhD students take on the pilot role for their last research "flight", after which they can go alone. For a Masters, the advisor often gives the research question, so they are a co-pilot who may help out by, for example, gathering or analyzing data. Finally, in a postgraduate research project, you are usually just an unpaid research assistant, but you get to see what is involved. If a paper results, the pilot is usually the first author, then co-pilot(s), with assistants optional. Research assistants paid to work as directed aren't usually listed as authors but acknowledged instead.

Role conflicts. It is important that student and advisor agree on roles from the beginning or problems can arise. For example, I did my PhD on information systems in the nineties, just as the Internet was starting up, after retiring from the army. My advisor wanted an assistant for his work on co-located computing, but I saw the future as distributed computing and wanted to pilot my own research. Eventually I was told to get with his program or leave, so I left and got another advisor. Of course, when a PhD student asks "What do you want me to research?" they want to be an assistant, although that is not what a PhD is about. Even worse, students who say "I'll do whatever you want for a PhD" are research prostitutes, while advisors who say "Do what I want or no degree" are corrupt. The research journey is not a business transaction.

The next two chapters are about what all academics do, namely write and review papers.


Pitfalls

 

Research pitfalls (picture: somewhere in New Zealand)

Fewer than 50% of PhD studies are completed, often due to difficulties encountered along the way. Common pitfalls of the research journey include:

  • The plains of irrelevance. Researchers who go no further than the plains of irrelevance gather common flowers and muse upon them while sitting in the sunshine on a grassy knoll of no consequence. This happens if the topic chosen for the research is trivial.
  • The mountains of thought. A good researcher must ascend a mountain of thought to view the terrain to be explored. Without good mental tools, one can fall into contradictions or get stuck on a ledge that points where you don't want to go. The rarefied air can make a climber dizzy or nauseous and clouds can obscure the view. This happens if the concepts used by the research are inappropriate, unclear, contradictory, confusing or incomplete.
  • The forest of complexity. When one descends to pick a research site it is easy to get lost in a forest of facts that indeed has no end. One can easily set up camp in the wrong place or wander around in circles and not find a suitable spot to settle. This happens when a researcher is unable to find a specific research question upon which to focus.
  • The island of fantasy. This island is well known as its mushrooms give a state called “bias” where whatever one imagines seems true. Those who gather here return with tall tales based on imagination not reality. This happens when the evidence gathered is tainted by one’s beliefs and expectations rather than being from a source independent of oneself.
  • The swamp of detail. A researcher weighed down with too much data can be sucked down and disappear in the swamp of detail. One has to distill what was found by analysis before returning, to turn a ton of data into a cup of results. This problem happens when a researcher just collects facts but does not analyze them.
  • The harbor of influence. The harbor of influence lets researchers share their findings with others for mutual benefit. Unless one arrives here, no-one else even knows the journey has been made. Those who don’t make conclusions that relate to others fail to establish the value of their journey.

Research is complex, so for your first journey take along an advisor who has explored before.


The Vehicle

Different sciences use different research method “vehicles”

Different science journeys have different vehicles. Just as explorers use different vehicles for different terrains, so scientists use different research method vehicles for different knowledge terrains. This is why different fields define science differently, but science is defined by the journey itself not the vehicle used. For any field to say “Only we do real science” based on methods is wrong. Science as a journey reveals what all science has in common.

A mathematical equation

Natural sciences like physics, for example, can gather physical evidence to support laws based on universal equations, e.g. E=mc² is a statement connecting energy to matter that works for all cases, whether for photons or gases. The domain of physics is the physical world and it aims to make conclusions about that world. In contrast, a natural science like geography, the study of earth formations, focuses less on equation-based laws than on geographical concepts that explain changes in earth data, often analyzed by statistics. To say that all science must have equations is like saying every vehicle must mount a big gun.
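
To see that universality concretely, the same equation gives the energy in any mass at all. A worked instance (the one-gram mass is just an illustrative choice):

    \[ E = mc^2 = (0.001\,\mathrm{kg}) \times (3 \times 10^8\,\mathrm{m/s})^2 \approx 9 \times 10^{13}\,\mathrm{J} \]

That is roughly the yield of an early atomic bomb, whether the gram is gas, dust or anything else.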

Indo-European languages tree

Formal sciences like mathematics, in contrast, operate in symbolic domains with no necessary connection to the physical world at all. Yet they still have a topic, define concepts, ask a question and use deductive methods to gather evidence to reach conclusions. Likewise, linguistics doesn't focus on the natural world but has a theory base that can be tested against language outputs to reach general conclusions, such as that a common Proto-Indo-European (PIE) language evolved into languages as diverse as Spanish, English, Hindustani, Portuguese, Bengali, Russian, Punjabi, German, Persian, French, Italian, Albanian, Kurdish, Nepali, Ukrainian and Welsh. To say that all science must study the natural world is to imply that all knowledge is about physical things, which is not true.

Taxonomy classifies life

Some sciences are mainly descriptive, like taxonomy, the classification of animals based on shared characteristics. If science were just a "set of facts", biologists would just gather facts about every animal, add them to a big database, and leave it at that. Instead each animal is classified into a kingdom, phylum, class, order, family, genus and species, to better "predict" new forms. There are no equations, experiments or universal laws, but knowledge based on evidence is still science. To say that all science must carry out experiments is to exclude sciences based on observation rather than the manipulation of what is observed.

Pseudoscience. All definitions of science distinguish it from pseudoscience, that which seems to be science but is not, e.g. alchemy, the study of how matter can transform, say lead into gold. Alchemists observed and did experiments, but the goal was not common knowledge as each kept their findings secret, nor was the belief that all matter came from the four "elements" of earth, air, water and fire ever questioned. Alchemy was not science, but over time it became the science of chemistry, which found that two airy gases, hydrogen and oxygen, can combine into water. Chemistry was then able to discover the hundred-plus elements of the periodic table. What made alchemy not science was not that it was wrong, as it obtained many observable results, but how it went about finding things out. In particular, it began by knowing the elements of matter instead of discovering them.

Falsifiability. Popper proposed the key feature of scientific knowledge to be falsifiability, that evidence can prove a statement wrong at any time. Thus alchemy was not a science, as one cannot find matter that does not have at least one of the properties of earth, air, water or fire. Then Kuhn pointed out that a failure to predict rarely spells the downfall of a theory, as when that happens proponents just change the theory to fit, e.g. finding a new animal that doesn't fit the current taxonomy just results in a new class to accommodate it. So while falsifiability is desirable, it is not essential. What is essential however is that scientists question their assumptions based on evidence. Modern thinkers like Lakatos see science as comparing alternative theories based on their power to consistently predict new findings without ad hoc adjustments. So science chooses theories that predict more and involve fewer ad hoc costs. For example, big bang theory, that our universe began at a moment in space and time about 13.8 billion years ago, is not "proven" but it is a better explanation of the facts than the alternative steady state theory – that the physical universe always existed. Every scientific theory is only "the truth" until a better one comes along.

Theory power. If science is the ability to answer questions, the power of a theory is how much it answers divided by the number of free parameters used to do so. For example, in the second century, Ptolemy's Almagest let people predict the movements of the stars for the first time, based on the idea that heavenly bodies, being heavenly, moved in perfect circles or circles within circles (epicycles) around the Earth. It wasn't true but the equations worked, because Ptolemy's followers amended the model after each new star was found, by changing the free parameters of epicycle, eccentric and equant to fit the facts. This backward thinking was only abandoned when Copernicus, Kepler, Galileo and Newton developed a more powerful model to replace it. In science, theories evolve based on their power not on their "truth". The next section looks at the pitfalls of science.
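
One way to write the idea as a formula (the notation is ours, not a standard measure):

    \[ \text{theory power} \approx \frac{\text{independent findings predicted}}{\text{free parameters assumed}} \]

By this measure Ptolemy's model scored low, as each new observation added parameters, while Newton's model scored high, as a few constants predicted many motions.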


The Journey

The research journey seems simple enough: ask a question, then gather evidence to find an answer. One chooses a topic, clarifies the concepts involved to form a question, then gathers evidence and analyzes it to form conclusions. This gives these steps: 1. Pick a topic, 2a. Review concepts, 2b. Ask question, 3. Gather evidence, 4. Analyze it, and 5. Form conclusions, as in the figure above. Yet it turns out that doing research is not simple at all, for two main reasons:

  • Knowledge is holistic.
  • Research is not linear.

Knowledge is holistic. Holistic systems are defined not just by their parts but also the interactions between those parts. Knowledge is holistic because it does not add simply, as a grain adds to a pile of sand. A pile of sand as the aggregate of its grains means that adding one more only changes it slightly. In contrast, a mental structure as a set of interacting ideas means that adding one more can change the whole meaning, e.g. adding one word to the sentence "I love you dearly as my life" to give "I love you dearly as my life, not" changes the meaning entirely. The figure shows this by putting concepts in the realm of theory and evidence in the realm of practice. The difference between theory and practice is critical to understanding how research turns practical evidence into theory knowledge. Knowledge emerges from practice as information emerges from physical events, as a higher way to view reality. This is why creating new knowledge is not just a matter of adding new "facts" to an existing "pile".

If science was just "filling a knowledge bucket", surely we would have done it by now? The "bucket of knowledge" does not fill because every question answered creates more. When Einstein discovered that time varies with the observer he changed all physics, as did quantum theory's discovery that all physical events are probabilistic. We know more than before but are no closer to a theory of everything, as every answer creates more questions. As what we know expands, what we don't know expands more! A true scientist knows that they don't know.

Research is not linear. When you ask scientists how they did new research, the answer is nothing like the apparent logic of the figure above. Research is more like stumbling in the dark than taking an express train, so the arrows in the figure go back as well as forward. One might head off in one direction only to find a dead-end and have to go back and start again. Finding the right question is not easy, so false starts are the norm not the exception. It is like drilling for oil, where one might drill in a likely place to find nothing, then drill in an unlikely place and find it. Research is discovered not crafted, so the journey is often convoluted not straight. Of all the people in the world, only Einstein asked what happens when an observer rides on a beam of light, and only he found evidence to conclude that time must stop. And it took a Darwin to ask why animal species were just so, and to conclude, from observing species on different islands, that they evolved. Such questions had not been asked before because everyone "knew" what time was and everyone "knew" that life was always as it is now.

Science is not a set of answers but a way of asking questions, a way of gaining knowledge that evolved over hundreds of years based on the premise that we don't know. This may seem strange, as the aim is to know things, but it is not knowing that creates knowing. We see science as modern, compared to say the ancient wisdom of the east, but the founders of Buddhism, Taoism, Confucianism and Science all wandered the earth at about the same time (around 600 BC). When you do research, you are following a wisdom tradition as ancient as any, for it is as hard to question and validate rightly as it is to think and act rightly. In this tradition, everyone "knows", but a scientist is one who knows that they do not know. The next section considers the research method as the vehicle of the scientific journey.


Science

"I came, I saw, I understood" (Brian Whitworth, 2018)

What is science? To think that science must be based on empirical observations leads to the view that mathematics is not a science as it is not based on observables! Others define it as the study of natural things, but then is astrology a science because it studies how stars affect people, both natural things? Or is quantum theory, which describes non-physical quantum events, not science? Some call it the "study (of) a body of facts or truths systematically arranged", but then what religion doesn't claim to systematically arrange what is true? And are computer simulations not science if they don't claim to represent facts? To say the purpose of science is to discover general laws diminishes sciences like history, as when Feynman said "Social science is an example of a science which is not a science… They follow the forms… but they don't get any laws." Others call it "the systematic study of the structure and behaviour of the physical and natural world through observation and experiment", but this excludes cosmology, as who can "experiment" on stars? Definitions like that science is "a systematic and logical approach to discovering how things in the universe work" mean that physical geography is not a science because it doesn't discover how things "work".

On the other hand, some definitions are more general, calling science a "state of knowing: knowledge as distinguished from ignorance or misunderstanding", but then is a painter in a state of knowing how to paint a scientist, or a monk who knows himself? Even broader is the Science Council's view that science is "the pursuit and application of knowledge and understanding of the natural and social world following a systematic methodology based on evidence". By this definition, cooking is a science because it has a systematic method and is based on evidence, even though it does not claim to be so.

It is all very confusing, as by some definitions mathematics, which Gauss called the queen of the sciences, is not a science at all, while by others cooking, which doesn't claim to be science, is one. Isn't it odd that scientists can't agree on what science is? The approach taken here is that science is an activity not a product, so it isn't defined by its output, just as farming isn't defined by the wheat, meat, wool, etc. that it produces. Science is no more a "body of knowledge" than farming is a "body of farm product".


Chapter 7 Discussion Questions

Research questions from the list below and give your answers, with reasons and examples. If you are reading this chapter as part of a class – either at a university or in a commercial course – work in pairs, then report back to the class.

1)   Define technology utopianism. What is the technology singularity? Give examples from movies. What is the big assumption behind the claim that computers will take over from people? Do you think computers will supersede people by 2025? Give reasons for your view.

2)   What technology advances did the last century expect by the year 2000? Which ones are still to come? What do people expect robots to be doing by 2050? What is realistic? How successful are mimic robots like Asimo and the Sony dog? How might social-technical design improve the Sony dog? How will robots evolve in the social-technical paradigm? Give promising examples, like elderly care.

3)   If super-computers with many processors can equal one human brain, are many brains together more intelligent than one? Look at the psychology of crowds to argue whether people are dumber or smarter together. Does adding more programmers to a project always finish it quicker? What, in general, affects whether a system of many parts performs as the sum of those parts? Is a super computer, with as many transistors as the brain has neurons, necessarily its processing equal? What is the difference?

4)   How do today's super-computers increase processing power? How many of the top ten super-computers use NVidia graphics cores? How is this power utilized in real computing tasks? What decides whether many processing cores can operate in parallel on a problem? (computer science students only).

5)   Review the current state-of-the-art for fully automated vehicles, like car, plane, train, etc. How many cases are implemented? Compare fully automated with remotely piloted vehicles. When does full computer control work and when not? (Hint: consider how phone help systems evolved). When might full computer control of a vehicle be useful? Suggest how computer control of vehicles will evolve, with examples.

6)   What is the 99% barrier? Why is the last 1% of accuracy a problem for productive tasks? Give examples from language, logic, art, music, poetry, driving and one other area. How common are such tasks in the world? How does the brain handle them?

7)   What is a human savant? Give examples past and present. What tasks do savants do easily? Can they compete with modern computers? What tasks do savants find hard? What is the difference? Give examples of areas where savants need support. If computers are like savants, what support do they need?

8)   Find three examples of software that, like Mr. Clippy, thinks it knows best. Give examples of: 1. Acts without asking, 2. Nags, 3. Changes secretly, 4. Makes you work.

9)   Think of a relationship issue you would like advice on and form a clear question, like “Should I argue with my mother when she criticizes me?” Now ask the same question in these three ways:

a)   Go to your bedroom alone and put a photo of a family member you like on a pillow. Ask the question out loud, then imagine their response.

b)   Go to an online computer advisor like Cleverbot and do the same.

c)   Ring an anonymous help line and do the same.

Compare and contrast the results. Which was the most helpful?

10)   A rational way to decide is to list all the options, assess each one and pick the best. How many options are there for these contests: 1. Checkers, 2. Chess, 3. Civilization (a strategy game), 4. A MMORPG, 5. A debate. Which ones are computers good at? What do people do if they cannot calculate all the options? Can a program do this? How do online gamers rate human and AI opponents? Why? Will this always be so?

11)   What is the difference between syntax and semantics in language? Which are programs better at? How successful are text-to-speech systems like NaturalReader for some text like a poem? What is the computer doing? Now with a friend who knows another language, try language-to-language translators like Google Translate on the same poem. How good is the computer at semantic level transformations? Discuss using John Searle’s Chinese room thought experiment.

7.2 A Social-Technical Future

A social-technical future. What use is technical progress without social progress? If people fighting with spears are given machine guns, how is that better? And if technology gives us nuclear bombs in suitcases what then? Giving savages advanced weapons is a recipe for disaster. The biggest threat to humanity today is us. We are the most dangerous animal on the planet thanks to technology but society keeps us in check. Our progress from hunter-gatherers fighting each other to rich global traders required social evolution. It is civilization that lets millions of people share electricity, water, roads, hospitals and shops without killing each other. Science advances by sharing information, as does the Internet. To see only technical progress is to ignore social progress and to look to technology to solve social problems is to look in the wrong place:

A man was looking for his lost keys at night under a lamp post. When asked where he lost them, he replied "Over there in the bushes – but the light is better here." Trying to find the answer to social conflict in technology is just as foolish.

Like it or not, our future depends as much on social progress as it does on technical progress.

A comparison. Figure 7.6 compares a technical vision of the future and the social-technical vision, where the physical level (1) is the hardware we see and the information level (2) is software that emerges from it but follows different rules, as programming is not physics. The human level (3) is to information as software is to hardware, so again works by different rules. The social level (4) likewise emerges from the human level below. In Figure 7.6a, information technology ignores the human and social levels, believing that hardware and software alone will soon surpass them, but to give our future to algorithms is to abandon ourselves to the mindless and heartless. In contrast, the social-technical vision of Figure 7.6b sees the future as people harnessing technology, which like fire is a good servant but a bad master. Technology without people is less than useless, it is pointless, so the future is people and technology, not technology alone.

Figure 7.6: a. Technical future vs. b. Social-technical future

People and computers are better than computers alone. Why tie up twenty-million-dollar super-computers to do what brains with millions of years of real-life beta-testing already do? Even if we made them work like the brain, say as neural nets, they could well inherit the same weaknesses. Unleashing the power of the Internet requires a more humble conception of what technology can do, to change the goal from trying to mimic people to helping them. The benefits are evident, as while robot cars are still on trial, reactive cruise control, range sensing and assisted parallel parking are here now. While computer surgery struggles, computer-supported remote surgery and computer-assisted surgery work now. While robots are learning to walk down steps, people with robot limbs can be better than able-bodied. While computer drones are a liability, remotely piloted drones are an asset. Computer-generated graphics are good, but state-of-the-art animations like Gollum in The Lord of the Rings combine human actors and computers. The TV show Robot Wars is people plus robots, not robots alone. Even on an 8×8 board, "centaur" players, people plus computers, beat computers alone:

“The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.” Kasparov, The Chess Master and the Computer, 2010

It is time to think man plus machine rather than man vs machine, that technology can advance with people rather than over them. All killer applications, from email to social media, succeeded by adding what people do best to what technology does best, letting computers handle the information and people manage meaning. The World Wide Web mainly made Internet links meaningful to people. The mobile phone was as much about making small screens usable as technical miniaturization. Google made search simple and used algorithms that made results relevant. Social media flourished by making human relations easier. That technology works better with us than without us doesn't downplay it, but says that to see the Internet only in technical terms is to underestimate it. To a savage, a mobile phone is a lump of metal not a world of information. Likewise today, the Internet is not a hardware platform but a social platform. And thanks to technology, social problems that have plagued society for generations are now hugely magnified.

Technology magnifies social problems. Whereas once I could only offend the people around me, today thanks to Twitter I can offend thousands, as Roseanne Barr discovered. Ideologies of death like ISIS that were once confined to backward states today find a global audience. Technology magnifies effects in the information age as it did in the industrial age. Last century, nuclear bombs magnified war but it was left to people to resist them. Clever people like von Neumann and Bertrand Russell advised the US to lay waste to Russia before it developed atom bombs, but ethics not game theory logic stopped the brutal killing of millions. Today, the Internet magnifies social problems like hate and lying that, like war, have only hurt us in the past. Those who once waged physical wars today wage information wars. Again technology challenges humanity and again it has no answer. Only people can deal with social dilemmas.

Social problems require social solutions. The challenges facing the Internet today are social problems like hate and lying, where fake news is just a modern name for lies. Hate is wishing harm to another and lying is saying what is not true. Both harm society, because hate creates internal conflict and lies destroy the trust that lets people work together. The alternative is the golden rule, to help the social environment that supports us. So honesty and truth are "good" because they help society produce more, and racism, sexism, bullying, lying and stealing are "bad" because they harm society. Now that our social problems have migrated to the Internet, we need social solutions at the information level.

Hate is not entitled to a technology megaphone (a New Zealand cartoon by Guy Body)

Understanding ethics. The Internet challenge is to understand what social means. If ethical people don't get their ethics right they can't make the right choices, e.g. the strategy "When they go low, we go high" didn't work because it's like saying "When they go to war, we go to peace." That didn't work against Hitler either. One must oppose evil not ignore it. In this view, social is what benefits the synergy of society, so the social body is no more obliged to host what harms it than your body is obliged to host a disease. Freedom includes the freedom to hate, but it is not the freedom to spread hate. There has never been a right to incite hate. One is entitled to an opinion but not to use the megaphone of technology to spread it. People making sensible choices about who is in their domains isn't censorship, it is the right to ban.

The right to ban. In 1984, New Zealand banned nuclear warships from its ports to express opposition to nuclear weapons. Despite diplomatic protests, trade continued, because free nations don't have to harbor what they don't agree with. The same applies today online when discussion boards choose not to harbor trolls. Banning is just asking someone to leave your place, which one is entitled to do. Online social platforms are no more obliged to harbor those who destroy their social fabric than planes are obliged to carry terrorists who want to blow them up. How foolish then for US technology to let Russian agents broadcast hate to interfere with its elections! This is software ignoring social effects. If the Internet is a garden of ideas, social platforms must weed out hate and lies or become echo chambers for conspiracy theories. "Do not feed the trolls" applies to Facebook and Twitter as it does to all of us. What then when social media act to undermine the society of which they are a part?

Russian trolls used Facebook to lob information bombs at US society

Are social media anti-social? America now knows that social media let foreign agents undermine its democracy in the 2016 election. In February 2018, the US indicted 13 Russian citizens and entities like the Internet Research Agency for influencing elections by posting fake news on social media. Facebook let Russian trolls drop information bombs on American society. How did what was a tool to connect people become a tool to divide society on hot-button issues like abortion, religion and politics? To make a profit, Facebook invented targeted ads, the information version of a laser-guided missile, where the laser is the AI software that analyzes messages and posts. When people post their likes, dislikes, feelings, political views and travel details, with pictures from all angles of their face, pets and family, they in effect create a dossier on themselves. This lets anonymous "information warriors" not only inject deceptive and hate-filled memes into their feed under the guise of "facts", but also target them based on age, sex, location and political affiliation. They "weaponized" propaganda on a global scale, so what once helped the Arab Spring now helps the hate speech behind the ethnic cleansing in Myanmar. No surprise that the weapon has now backfired on America. The original customers of Facebook, who create its content, are now its product. Facebook technology paints a virtual target on people's backs for its real customers, advertisers, and these new-generation ads follow you around. Going into the business of selling people to advertisers turned Facebook's platform of ideas into a platform of lies, and turning an information pipeline into an information sewer cannot be described as "doing good".

What can be done? Is helping a foreign agent's cyber-attack on your country treason? Does how the attack is done matter? To say information war is not new is to miss the point – that it is harmful. It is a bad look, so Facebook is responding, but too little, too late. Closing down sites just closes the stable door after the horse has bolted. Trying to "vet" sources is also futile, as agents can easily open a US shell company to buy ads to target people on divisive topics. Nor will labeling ads, as on TV, help. The right fix, to stop targeted ads, is not an option, so only fake fixes remain. Facebook has over two billion "users" and has bought 71 other companies, including WhatsApp and Instagram. When something that big fails, only something bigger can stop it, and that is society. Society is slow to act, but its powers to seize, imprison, fine and regulate are not to be trifled with. The question is now not what people want but what society wants, and it doesn't want discord. To let dirty information infect people's minds is no different from letting dirty water infect their bodies. When social media become anti-social, the dog bites its master, and we all know what happens then.

The Internet is a mirror to humanity. Some say the Internet is making us stupid but online media just expose what we think. The Internet is just an electronic mirror that reflects us, so don’t blame the mirror if it looks bad. It is good because for humanity to change, it must first see itself. Today’s information wars are an opportunity to educate people to recognize and respond to hate and lies. To recognize them, look at the source, where information comes from. Know that one can refuse to harbor hate and call out lies. The temptation is to hate and lie too, but the enemy is not who we hate but hate itself, nor who tells lies but lying itself. Each of us has to think for ourselves, not cherry-pick “alternative facts”. As the Flemish mystic Jan van Ruysbroeck said in the fourteenth century when asked about the wickedness of the inquisition:

We are what we behold, and we behold what we are.

And those who saw devils then still see them today, with technology helping, e.g. if you Google “Dogs are bad” it gives reasons to hate them, while Googling “Dogs are good” gives reasons to love them (try it!). Humanity is now choosing what it thinks online with web-counters keeping the score. When people click, follow, like, friend, comment and post, they are creating the Internet. Technology is not evolving beyond us but is part of human social evolution, an experiment that has been ongoing for thousands of years. The Internet is helping humanity to upgrade itself by making that a necessity.


7.1 A Technology Future?

Technological utopianism is the belief that technical progress always benefits us in the long run and so will automatically lead to the end of sickness, hunger, poverty and even death itself. Yet as critics point out, such techno-idealism is not fact-based:

"The unfulfilled promise of past technologies rarely bothers the most fervent advocates of the cutting edge, who believe that their favorite new tool is genuinely different from all others that came before. And because popular belief in the world-saving power of technology is often based on myth rather than carefully collected data or rigorous evaluation, it is easy to see why technological utopianism is so ubiquitous: myths, unlike scientific theories, are immune to evidence." Evgeny Morozov

Delhi smog affects health

The techno-realist approach taken here suggests that technology harms as well as helps. In the area of health, Richard Lear points out that while traditional heart disease and cancer rates are steady, there has been a rapid rise since 1990 in the rates of diseases like autism (2094%), Alzheimer's (299%), diabetes (305%), sleep apnea (430%), celiac disease (1111%), ADHD (819%), asthma (142%), depression (280%), bipolar youth (10833%), osteoarthritis (449%), lupus (787%), inflammatory bowel (120%), chronic fatigue (11027%), fibromyalgia (7727%) and multiple sclerosis (117%). This unprecedented explosion of autoimmune, nerve, metabolic and inflammatory diseases in one generation seems related to our use of technology. Doctors argue that sitting at a computer is the new smoking. Others blame technologizing the planet for heat waves, city smog, rising sea levels, forest fires, hurricanes and droughts. Experts point out that technology is unlikely to fix these problems. So maybe technology is killing us?

The robot in Terminator

The myth of the robot. Technological utopianism goes a step further, claiming that computers will soon surpass humans by Moore's law, that computer power doubles every eighteen months. In Figure 7.3, computers processed as an insect in 2000 and as a mouse in 2010, and will exceed the human brain in 2025. In this view, computers are an unstoppable evolutionary juggernaut, but fast-forward to 2018: while engineers have made tiny robo-flies, computing still struggles to do what flies do with a neuron sliver:

"The amount of computer processing power needed for a robot to sense a gust of wind, using tiny hair-like metal probes embedded on its wings, adjust its flight accordingly, and plan its path as it attempts to land on a swaying flower would require it to carry a desktop-size computer on its back." Cornell University

Figure 7.3: The exponential growth of simple processing (Kurzweil, 1999)
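
To get a sense of the scale Figure 7.3 assumes, here is a minimal Python sketch of the doubling arithmetic (the 18-month doubling period is Kurzweil's premise, not measured data):

    # Moore's law premise: processing power doubles every 18 months.
    def power_factor(years, doubling_months=18.0):
        """Relative growth in processing power after a number of years."""
        return 2 ** (years * 12 / doubling_months)

    for year in (2010, 2025):
        print(year, round(power_factor(year - 2000)))
    # 2010 -> ~102x the power of 2000, 2025 -> ~104,000x

On this premise power grows about a hundred-fold per decade, which is the exponential curve the figure plots and the claim the rest of this section questions.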
Figure 7.4: C-3PO from Star Wars

Cornell engineers conclude they will need new "event-based" algorithms running on a non-traditional chip architecture to mimic neural activity. Yet the myth of the robot remains popular in science fiction, e.g. Rosie in The Jetsons, C-3PO in Star Wars and Data in Star Trek are robots that read, talk, walk, converse, think and feel. We do these things easily, so how hard could it be? In movies, we see robots that learn (Short Circuit), reproduce (Stargate's replicators), think (The Hitchhiker's Guide's Marvin), become self-aware (I, Robot), rebel (Westworld) and then replace us (The Terminator, The Matrix). What is not apparent to most is that the myth of the robot is the centuries-old myth of the machine re-incarnated in a modern context.

The clockwork universe is a centuries-old idea

The myth of the machine. By the 19th century, many felt that the universe was just a clockwork machine where, as Laplace said:

"We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes." Laplace, A Philosophical Essay on Probabilities

Laplace was saying, in a complex way, that if I knew what every atom in your brain was doing, I could predict your thoughts and acts. In a clockwork universe the future is set, including who you fall in love with, what your job is and when you die. It's all just physics, but a century later physics found that reality isn't like that at all. Quantum theory killed the clockwork universe, because Heisenberg's uncertainty principle doesn't let us fully measure what particles are doing now. And quantum events multiply: to simulate a hundred electrons would take a computer bigger than the earth, so to simulate even a big atom like Uranium would require extra planets! This changes the estimate for when computers will simulate the brain from decades to millions of years. Physics does not support classical computing claims, because quantum particles interact exponentially, a phenomenon linguistics calls productivity.
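
A minimal sketch of why the numbers explode (the 16 bytes per amplitude is a standard double-precision assumption):

    # A quantum state of n two-level particles needs 2**n complex amplitudes.
    # At 16 bytes per double-precision complex number, memory alone is hopeless.
    def state_memory_bytes(n_particles):
        return (2 ** n_particles) * 16

    for n in (10, 50, 100):
        print(n, format(state_memory_bytes(n), ".3e"), "bytes")
    # n=100 -> ~2.0e31 bytes, about a billion times the world's total storage

Each added particle doubles the memory needed, which is why classical simulation hits a wall long before brain-sized systems.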

Quantum events explode

Reality is productive. The myth of the machine implies that people are machines, so in psychology Behaviorism argued that people were fully explained by their input and output behavior. Chomsky's answer was to note that the productivity of language is such that five-year-olds can speak more different sentences than they could learn in a lifetime at one sentence per second (Chomsky, 2006), so they couldn't be just learning behaviors. The myth of the robot fails when possible sentences explode, just as the myth of the machine fails when possible events explode in quantum calculations. A physical explosion is when one atom chemically interacts with others that do the same in rapid escalation, and likewise information explodes when choices interact to cause other choices. The same occurs in other fields like genetics and sociology: when many elements interact, the choices soon go beyond classical calculation. That calculations fail when information explodes is illustrated by the 99% barrier.
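
A toy illustration of Chomsky's point (the vocabulary size and sentence length are made-up numbers, chosen only for scale):

    # Even a small vocabulary yields more sentences than a lifetime can utter.
    VOCAB = 1000     # words a young child might command (illustrative)
    LENGTH = 5       # words per sentence (illustrative)

    sentences = VOCAB ** LENGTH                  # naive combinatorial bound
    lifetime_seconds = 80 * 365 * 24 * 3600      # ~2.5 billion seconds
    print(sentences)                             # 10**15 possible strings
    print(round(sentences / lifetime_seconds))   # ~396,000 lifetimes at 1/sec

Most such strings are nonsense, but even a tiny grammatical fraction still dwarfs what could be learned one behavior at a time.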

Reality requires the last one percent

The 99% barrier. The 99% barrier has bugged AI from the beginning, e.g. 99% accurate computer voice translation is one error per minute, which is unacceptable in conversation. Likewise 99% accuracy for robot cars is an accident a day, again unacceptable! Since a good driver’s mean time between accidents (MTBA) is about 20 years, real life requires better. Yet getting that last percentage is inordinately hard, as 100% accuracy is impossible in an indeterminate world. Hence robot successes are either “almost there” like the robofly or involve unreal scenarios like chess or Go. Getting a driverless car to work in an ideal setting is not enough, as the CEO of a Boston-based self-driving car company says:

"Technology developers are coming to appreciate that the last 1 percent is harder than the first 99 percent. Compared to last 1 percent, the first 99 percent is a walk in the park." Karl Iagnemma
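
A back-of-envelope check on why 99% fails in real time (the speaking rate and the one-decision-per-second driving rate are assumptions for illustration):

    # Errors scale with throughput, so 99% accuracy fails fast in real time.
    accuracy = 0.99
    words_per_minute = 150                          # conversational speech (assumed)
    print(round(words_per_minute * (1 - accuracy), 1))   # 1.5 errors per minute

    decisions_per_hour = 3600                       # one driving decision/sec (assumed)
    print(round(decisions_per_hour * (1 - accuracy)))    # 36 bad decisions per hour

Real conversation and real driving tolerate errors per year, not per minute, which is why the last 1% matters so much.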

Driverless cars will be a form of public transport in no-car areas

The reality is that fatal robot car accidents are a matter of when, not if. And it has already happened, as when a driverless Uber car killed a jaywalking pedestrian after only 2 million miles, versus the US average of 1.18 deaths per 100 million miles for human drivers. And how often do human minders have to intervene? The current Waymo rate of an intervention every five thousand miles means minders have to pay attention all the time, so the passenger might as well be driving. Add in bad weather, system failures and unusual events, and it isn't hard to see why Uber recently halted self-driving car tests. Realistically, one can expect robo-cars to debut as a form of 25 mph public transport in restricted no-car areas, not on main roads. The last 1% will not give up easily!
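
Taking the text's figures at face value (a single fatality is a tiny sample, so this is only indicative):

    # One death in 2 million autonomous miles vs. the US human average
    # of 1.18 deaths per 100 million miles (figures from the text).
    uber_rate = 1 / 2e6
    human_rate = 1.18 / 100e6

    print(round(1 / human_rate))          # one death per ~84,745,763 human miles
    print(round(uber_rate / human_rate))  # ~42x the human fatality rate

Even allowing for the small sample, the gap illustrates how far the last 1% still has to go.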

Figure 7.5: Kim Peek (a) inspired the film Rain Man (b)

Calculating alone can't solve real-life tasks. The brain did not cross the 99% performance barrier by calculating more, because it cannot be done. Simple processing power depends on neuron numbers but advanced processing does not. Our brain, as it evolved, found this out by trial and error. In savant syndrome, people who can calculate 20-digit prime numbers in their head need full-time care to live in society; e.g. Kim Peek, who inspired the movie Rain Man, could recall every word on every page of over 9,000 books, including all Shakespeare and the Bible, but had to be looked after by his father to live in society (Figure 7.5). He was a calculating genius but in human terms was neurologically disabled, as the higher parts of his brain didn't develop. Savant brains are brains without the higher sub-systems that allow abstract thought. They calculate better because the brain tried simple processing power in the past and moved on. Computers are electronic savants, calculation wizards that need minders to work in the real world.

Beyond calculations. Von Neumann designed the architecture of the first stored-program computers, and almost every computer today still follows it. He made these simplifying assumptions to ensure success, sketched in code after the list below:

  • Control: Centralized. All processing is directed from a central processing unit (CPU).
  • Input: Sequential. Input channels are mainly processed in sequence.
  • Output: Exclusive. Output resources are locked for single use.
  • Storage: Location based. Information is accessed by memory address.
  • Initiation: Input driven. Processing is initiated by input.
  • Self-processing: Minimal. Minimize self-referential processing to avoid paradoxes and infinities.
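
As a toy illustration of these assumptions (the instruction set and program are invented for illustration, not any real machine):

    # A toy von Neumann machine: one CPU, sequential fetch-decode-execute,
    # address-based memory.
    memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12),
              3: ("HALT", None), 10: 2, 11: 3, 12: 0}   # program, then data

    acc, pc = 0, 0                 # one accumulator, one program counter
    while True:
        op, addr = memory[pc]      # fetch: all control flows through one point
        pc += 1                    # input driven, sequential by default
        if op == "LOAD":
            acc = memory[addr]     # location-based storage
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc     # exclusive output
        elif op == "HALT":
            break

    print(memory[12])              # 5: everything happened one step at a time

Every operation passes through one central loop, one step at a time, with no self-reference, which is exactly the design the brain abandoned, as the next list shows.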

How did the brain manage "incalculable" tasks given that it is an information processor? Its trillion (10^12) neurons are biological on/off devices powered by electricity, in principle no different from transistors. The brain crossed the 99% performance barrier by taking the design risks Von Neumann avoided, namely:

  • Control: Decentralized. Processing sub-systems work on a first come first served basis.
  • Input: Massively parallel. Input channels are massively parallel, e.g. the optic nerve.
  • Output: Overlaid. Primitive and advanced sub-systems overlap in output control.
  • Storage: Connection based. Information is stored by connection patterns.
  • Initiation: Process driven. Processing can self-initiate, to hypothesize and predict.
  • Self-processing: Maximal. Processing of processing gives us a “self” to allow social activity.

The brain evolved to handle reality, not to calculate it

Super-computers can maximize processing by running NVidia graphics cards in parallel, but the brain went beyond that. Its biological imperative was to deal with uncertainty, not to calculate it. Technical utopianism assumes that all processing is simple processing, but by repeated processing upon processing the brain came to construct a self, others and a community, the same constructs that human and computer savants struggle with. Far from being an inferior biological computer, the brain is a different type of processor, so today's super-computers aren't even in the same processing league. Computers calculate better than us as cars travel faster and cranes lift more, but the brain evolved to handle reality, not to calculate it. If today's computers excel at the sort of calculating the brain outgrew millions of years ago, how are they the future? AI will never surpass HI (Human Intelligence) because it is not even going in the same direction.
