2.2 Design Requirements

The aim of system design is to find problems early: a misplaced wall on an architect’s plan can be moved by the stroke of a pen, but once the wall is built, changing it is not so easy. Yet to design a thing, its performance requirements must be known. This is the job of requirements engineering, which analyzes stakeholder needs to specify what a system must do, so that the stakeholders can sign off on the final product. It is basic to system design:

“The primary measure of success of a software system is the degree to which it meets the purpose for which it was intended. Broadly speaking, software systems requirements engineering (RE) is the process of discovering that purpose...” (Nuseibeh & Easterbrook, 2000, p. 1)

A requirement can be a particular value (e.g. uses SSL), a range of values (e.g. costs less than $100), or a criterion scale (e.g. is secure). Given a system’s requirements, designers can build it, but the computing literature cannot agree on what the requirements are. One text lists usability, maintainability, security and reliability (Sommerville, 2004, p. 24), but the ISO 9126-1 quality model has functionality, usability, reliability, efficiency, maintainability and portability (Losavio et al., 2004).
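The three kinds of requirement above can be expressed as simple predicate checks. A minimal sketch in Python, where all names, fields and thresholds are illustrative, not from the text:

```python
# Three kinds of requirement, as in the text: a particular value,
# a range of values, and a criterion scale. All names are hypothetical.

def uses_ssl(system):            # particular value: the protocol is SSL, or not
    return system.get("protocol") == "SSL"

def under_budget(system):        # range of values: costs less than $100
    return system.get("cost", float("inf")) < 100

def is_secure(system):           # criterion scale: a rating that must reach 4 of 5
    return system.get("security_rating", 0) >= 4

requirements = [uses_ssl, under_budget, is_secure]

proposed = {"protocol": "SSL", "cost": 80, "security_rating": 4}
print(all(req(proposed) for req in requirements))  # every requirement is met here
```

The point the chapter goes on to make is that the hard part is not checking such predicates but agreeing on which ones belong in the list at all.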

Berners-Lee made scalability a World Wide Web criterion (Berners-Lee, 2000) but others advocate open standards between systems (Gargaro et al., 1993). Business prefers cost, quality, reliability, responsiveness and conformance to standards (Alter, 1999), but software architects like portability, modifiability and extendibility (de Simone & Kazman, 1995). Others espouse flexibility (Knoll & Jarvenpaa, 1994) or privacy (Regan, 1995). On the issue of what computer systems need to succeed, the literature is at best confused, giving what developers call the requirements mess (Lindquist, 2005). It has ruined many a software project. It is the problem that agile methods address in practice and that this chapter now addresses in theory.

In current theories, each specialty sees only itself. Security specialists see security as availability, confidentiality and integrity (OECD, 1996), so to them, reliability is part of security. Reliability specialists see dependability as reliability, safety, security and availability (Laprie & Costes, 1982), so to them security is part of a general reliability concept. Yet both cannot be true. Similarly, a usability review finds functionality and error tolerance part of usability (Gediga et al., 1999) while a flexibility review finds scalability, robustness and connectivity to be aspects of flexibility (Knoll & Jarvenpaa, 1994). Academic specialties usually expand to fill the available theory space, but some specialties recognize their limits:

“The face of security is changing. In the past, systems were often grouped into two broad categories: those that placed security above all other requirements, and those for which security was not a significant concern. But … pressures … have forced even the builders of the most security-critical systems to consider security as only one of the many goals that they must achieve.” (Kienzle & Wulf, 1998, p. 5)

The obvious conclusion is that analyzing performance goals in isolation is now giving diminishing returns.


2.1 The Elephant in the Room

Many blind men examine an elephant, and each gets a different idea of what it is.

The beast of computing has regularly defied pundit predictions. Key advances like the cell phone (Smith et al., 2002) and open-source development (Campbell-Kelly, 2008) were not predicted by the experts of the day, though the signs were there for all to see. Experts were pushing media-rich systems even as lean text chat, blogs, texting and wikis took off. Even today, people with smartphones still send text messages. Google’s simple white screen scooped the search engine field, not Yahoo’s multi-media graphics. The gaming innovation was social gaming, not the virtual reality helmets the experts predicted. Investors in Internet bandwidth lost money when users did not convert to a 3G video future. Cell phone companies are still trying to get users to sign up to 4G networks.

In computing, the idea that practice leads but theory bleeds has a long history. Over thirty years ago, paper was declared “dead”, to be replaced by the electronic paperless office (Toffler, 1980). Yet today, paper is used more than ever before. James Martin saw program generators replacing programmers, but today, we still have a programmer shortage. A “leisure society” was supposed to arise as machines took over our work, but today we are less leisured than ever before (Golden & Figart, 2000). The list goes on: email was supposed to be for routine tasks only, the Internet was supposed to collapse without central control, video was supposed to replace text, teleconferencing was supposed to replace air travel, AI smart-help was supposed to replace help-desks, and so on.

We get it wrong time and again, because computing is the elephant in our living room. We cannot see it because it is too big. In the story of the blind men and the elephant, one grabbed its tail and found the elephant like a rope and bendy, another took a leg and declared the elephant was fixed like a pillar, a third felt an ear and thought it like a rug and floppy, while the last seized the trunk, and found it like a pipe but very strong (Sanai, 1968). Each saw a part but none saw the whole. This chapter outlines a holistic vision of the many dimensions of the elephant of computing.


 

2. Design Spaces

While the previous chapter described computing system levels, this chapter describes how the dimensions of computing performance create a design space.

2.1 The Elephant in the Room

2.2 Design Requirements   

2.3 What is a Design Space?

2.4 Non-functional Requirements   

2.5 Holism and Specialization   

2.6 Constituent Parts   

2.7 Requirements Engineering   

2.8 The Web of System Performance

2.9 Design Tensions and Innovation   

2.10 Project Development   

Chapter 2. Discussion Questions   

Chapter 2. References   

Chapter 1. References

Bertalanffy, L. V. (1968). General System Theory. New York: George Braziller Inc.

Bone, J. (2005). The social map and the problem of order: A re-evaluation of ‘Homo Sociologicus’. Theory & Science, 6(1).

Boulding, K. E. (1956). General systems theory – the skeleton of a science. Management Science, 2(3), 197–208.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. Management Information Systems Quarterly, 13(3).

Diamond, J. (1998). Guns, Germs and Steel. London: Vintage.

Kant, I. (1781/2002). Critique of Pure Reason. In M. C. Beardsley (Ed.), The European Philosophers from Descartes to Nietzsche. New York: The Modern Library.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limitations on our capacity for processing information. Psychological Review, 63, 81–97.

Norman, D. A. (1990). The Design of Everyday Things. New York: Bantam Doubleday.

Penrose, R. (2005). The Road to Reality: A Complete Guide to the Laws of the Universe. London: Jonathan Cape.

Raymond, E. S. (1999). The Cathedral and the Bazaar. Cambridge, MA: O’Reilly Media.

Sanders, M. S., & McCormick, E. J. (1993). Human Factors in Engineering and Design. New York: McGraw-Hill.

Shannon, C. E., & Weaver, W. (1949). The Mathematical Theory of Communication. Urbana: University of Illinois Press.

Skinner, B. F. (1948). ‘Superstition’ in the pigeon. Journal of Experimental Psychology, 38, 168–172.

Whitworth, B. (2006). Measuring disagreement. In R. A. Reynolds, R. Woods, & J. D. Baker (Eds.), Handbook of Research on Electronic Surveys and Measurements (Chapter XI). Hershey, PA: Idea Group Reference.

Whitworth, B. (2009). The social requirements of technical systems. In B. Whitworth & A. de Moor (Eds.), Handbook of Research on Socio-technical Design and Social Networking Systems (Chapter 1). Hershey, PA: IGI.

Whitworth, B. (2011). The virtual reality conjecture. Prespacetime Journal, 2(9), 1404–1433.

Whitworth, B., & de Moor, A. (2003). Legitimate by design: Towards trusted virtual community environments. Behaviour & Information Technology, 22(1), 31–51.

Whitworth, B., & Friedman, R. (2009). Reinventing academic publishing online Part I: Rigor, relevance and practice. First Monday, 14(8).

Whitworth, B., & Whitworth, A. (2010). The social environment model: Small heroes and the evolution of human society. First Monday, 15(11).

Wilson, E. O. (1975). Sociobiology: The New Synthesis. Cambridge, MA: Harvard University Press.

 

Chapter 1. Discussion Questions

Choose questions from the list below, research them, and give your answers, with reasons and examples. If you are reading this chapter as part of a class – either at a university or in a commercial course – work in pairs, then report back to the class.

1)   How has computing evolved since it began? Is it just faster machines and better software? What is the role of hardware companies like IBM and Intel in modern computing?

2)   How has the computing business model changed as it evolved? Why does selling software make more money than selling hardware? Can selling knowledge make even more money? What about selling friendships? Can one sell communities?

3)   Is a kitchen table a technology? Is a law a technology? Is an equation a technology? Is a computer program a technology? Is an information technology (IT) system a technology? Is a person an information technology? Is an HCI system (person plus computer) an information technology? What, exactly, is not a technology?

4)   Is any set of people a community? How do people form a community? Is a socio-technical system (an online community) any set of HCI systems? How do HCI systems form an online community?

5)   Is computer science part of engineering or of mathematics? Is human computer interaction (HCI) part of engineering, computer science or psychology? Is socio-technology part of engineering, computer science, psychology or one of the social sciences (like sociology, history, political science, anthropology, etc.)?

6)   In an aircraft, is the pilot a person, a processor, or a physical object? Can one consistently divide the aircraft into human, computer and mechanical parts? How can one see it?

7)   What is the reductionist dream? How did it work out in physics? Does it recognize computer science? How did it challenge psychology? Has it worked out in any discipline?

8)   How much information does a physical book, which by definition is fixed in one way, have? If we say a book “contains” information, what is assumed? How is a book’s information generated? Can the same physical book “contain” different information for different people? Give an example.

9)   If information is physical, how can data compression put the same information in a physically smaller signal? If information is not physical, how does data compression work? Can we encode more than one semantic stream into one physical message? Give an example.

10)   Is a bit a physical “thing”? Can you see or touch a bit? If a signal wire sends a physical “on” value, is that always a bit? If a bit is not physical, can it exist without physicality? How can a bit require physicality but not itself be physical? What creates information, if it is not the mechanical signal?

11)   Is information concrete? If we cannot see information physically, is the study of information a science? Explain. Are cognitions concrete? If we cannot see cognitions physically, is the study of cognitions (psychology) a science? Explain. What separates science from imagination if it can use non-physical constructs in its theories?

12)   Give three examples of other animal species who sense the world differently from us. If we saw the world as they do, would it change what we do? Explain how seeing a system differently can change how it is designed. Give examples from computing.

13)   If a $1 CD with a $1,000 software application on it is insured, what do you get if it is destroyed? Can you insure something that is not physical? Give current examples.

14)   Is a “mouse error” a hardware, software or HCI problem? Can a mouse’s hardware affect its software performance? Can it affect its HCI performance? Can mouse software affect HCI performance? Give examples in each case. If a wireless mouse costs more and is less reliable, how is it better?

15)   Give three examples of a human requirement giving an IT design heuristic. This is HCI. Give three examples of a community requirement giving an IT design heuristic. This is STS.

16)   Explain the difference between a hardware error, a software error, a user error and a community error, with examples. What is the common factor here?

17)   What is an application sandbox? What human requirement does it satisfy? Show an online example.

18)   Distinguish between a personal requirement and community requirement in computing. Relate to how STS and HCI differ and how socio-technology and sociology differ. Are sociologists qualified to design socio-technical systems? What about HCI experts?

19)   What in general do people do if their needs are not met in a physical situation? What do users do if their needs are not met online? Is there a difference? Why or why not? What do citizens of a physical community do if it does not meet their needs? What about an online community? Again, is there a difference? Give specific examples to illustrate.

20)   According to Norman, what is ergonomics? What is the difference between ergonomics and HCI? What is the difference between HCI and STS?

21)   Give examples of the following: Hardware meeting engineering requirements. Hardware meeting Computer Science requirements. Software meeting CS requirements. Hardware meeting psychology requirements. Software meeting psychology requirements. People meeting psychology requirements. Hardware meeting community requirements. Software meeting community requirements. People meeting community requirements. Communities meeting their own requirements. Which of these are computing design?

22)   Why is an iPod so different from TV or video controls? Which is better and why? Why has TV remote design changed so little in decades? If TV and the Internet compete for the hearts and minds of viewers, which one will win?

23)   How does an online friend differ from a physical friend? Can friendships transcend physical and electronic interaction architectures? Give examples. How is this possible?

24)   How available are academic papers? Pick 10 important journal papers and using non-university browsing, try to access them for free. How many author home pages offer their own papers for free download? Should journals be able to copyright papers they neither wrote nor paid for?

25)   Why do universities divide computing research across many disciplines? What is a cross-discipline? What past cross-disciplines became disciplines? Why is computing a cross-discipline?

1.10 The Flower of Computing

Figure 1.10: The four stages of computing

Figure 1.10 shows how computing evolved through the four stages of hardware, software, people and community. At each stage, a new specialty joined computing, but pure engineers still see only mechanics, pure computer scientists only information, pure psychologists only human constructs, and pure sociologists only social structures. Yet the multi-discipline of computing as a whole is not pure, because purity is not the future. It is more akin to a bazaar than a cathedral, as computer practitioners understand (Raymond, 1999).

In academia, computing struggles because academics must specialize to get publications, grants and promotions (Whitworth & Friedman, 2009), so discipline specialties guard their knowledge in journal castles with jargon walls. Like medieval fiefdoms, they hold hostage knowledge that by its nature should be free. The divide and conquer approach of reductionism does not allow computing to prosper as an academic multi-discipline.

In practice, however, computing is thriving. Every day more people use computers to do more things in more ways, but in the universities engineering, computer science, health, business, psychology, mathematics and education compete for the computing crown. For example, health created its own field of informatics, with separate journals, conferences and courses, to meet its non-engineering/non-business computing needs. Computing is the Afghanistan of academia, often invaded but never conquered. It should be the Singapore, a knowledge trade center. The kingdom of research into computing is currently weak because it is a realm divided. It will get weaker if music, art, journalism, architecture etc. also add outposts. Computing researchers are scattered over the academic landscape like the tribes of Israel, some in engineering, some in computer science, some in health, etc. Yet we are one.

Figure 1.11: The flower of computing

The flower of computing is the fruit of many disciplines but it belongs to none. It is a new multi-discipline in itself (Figure 1.11). For it to bear research fruit, its discipline parents must release it. Using different terms, models and theories for the same subject just invites confusion. Universities that compartmentalize computing research into isolated discipline groups deny its multi-disciplinary future. As modern societies federate states and nations, so the future of computing is as a federation of disciplines. Until computing research unifies, it will remain as it is now — a decade behind computing practice.


 

1.9 Design Level Combinations

Figure 1.8a: Apple controls meet human requirements.

The idea of computing levels makes system design complex, as the design requirements of a higher level “flow down” to affect levels below it. This gives us a variety of design fields, as follows.

1. Ergonomics designs safe and comfortable machines for people. Applying biological needs, such as avoiding postural strain and eye-strain, to technology design merges biology and engineering.

2. Object design applies psychological needs to technology in the same way (Norman, 1990): e.g. a door’s design affects whether it is pushed or pulled. An affordance is a physical object feature that cues its human use, as buttons cue pressing. Physical systems designed with affordances based on human requirements perform better. In World War II, aircraft crashed until engineers designed cockpit controls with the cognitive needs of pilots in mind, as follows (with computing examples):

Figure 1.8b: TV controls meet engineering requirements

  • Put the control by the thing controlled, e.g. a handle on a door (context menus).

  • Let the control “cue” the required action, e.g. a joystick (a 3D screen button).

  • Make the action/result link intuitive, e.g. press a joystick forward to go down (press a button down to turn on).

  • Provide continuous feedback, e.g. an altimeter (a web site breadcrumbs line).

  • Reduce mode channels, e.g. altimeter readings (avoid edit and zoom mode confusions).

  • Use alternate sensory channels, e.g. warning sounds (error beeps).

  • Let pilots “play”, e.g. flight simulators (a system sandbox).

3. Human computer interaction applies psychology requirements to screen design. Usable interfaces respect cognitive needs; e.g. by the nature of human attention, users do not usually read the entire screen. HCI turns psychological needs into IT designs as architecture turns buyer needs into house designs. Compare Steve Jobs’ iPod to a television remote (Figure 1.8). Both are controls, but one is a cool tool and the other a mass of buttons. If one was designed to engineering requirements and the other to HCI requirements, which performs better?

4. Fashion is based on the social need to look good applied to wearable object design. In computing, a mobile phone can be a fashion accessory, just like a hat or handbag. Its role is to impress, not just to function. Aesthetic criteria apply when people buy mobile phones to be trendy or fashionable, so color can be as important as battery life in mobile phone design.

5. Socio-technology is information technology that meets social requirements. Anyone online can see its power, but most academics see it as an aspect of their specialty, rather than a new multi-discipline in its own right. As computing evolved a social level, social requirements became part of computing design (Sanders & McCormick, 1993).

Multi-disciplinary fields cannot, by their nature, be reduced to component discipline specialties; e.g. sociologists study society not technology, and technologists study technology not society, so neither can address socio-technical design — how social needs impact technical design. Table 1.3 summarizes design fields by level combination.

Design       Requirements    Target               Examples
STS          Social          IT                   Wikipedia, YouTube, eBay
Fashion      Social          Physical accessory   Mobile phone as an accessory
HCI          Psychological   IT                   Framing, border contrast, richness
Design       Psychological   Technology           Keyboard, mouse
Ergonomics   Biological      Technology           Adjustable height screen

Table 1.3: Design fields by target and requirement levels

Figure 1.9: How computing levels relate to computing requirements

In Figure 1.9, higher level requirements flow down to lower level design, giving a “higher affects lower” design principle. Higher levels direct lower ones how to improve system performance, as the requirements of each level flow down to those below; e.g. a community creates normative influence at the citizen level, laws at the informational level, and cultural events at the physical level. The same applies online, as online communities make demands of Netizens as well as software. STS design is therefore about having it all: reliable devices, efficient code, intuitive interfaces and sustainable communities. Ultimately, social requirements such as privacy and freedom will affect interface design, how software is written and even how hardware is built.

Note that the social level is open ended, as social groups form higher social groups, e.g. in physical society, over thousands of years, families formed tribes, tribes formed city states, city-states formed nations and nations formed nations of nations, each with more complex social structures (Diamond, 1998). How social units combine into higher social units with new requirements is discussed further in Chapter 5, where the social unit of analysis can be a person, a friend dyad, a group, a tribe, etc.

So it is naive to think that friend systems like Facebook are the last step, that social computing will stop at a social unit size of two. Beyond friends are tribes, cities, city-states, nations and meta-nations like the USA. Since we have a friend but belong to a community, the rules also change. With the world population at seven billion and growing, Facebook’s over a billion active accounts are just the beginning. The future is computer support not just for friends, but also for families, tribes, nations and even global humanity.

For example, imagine a group browser, designed for many not just one, so that people can browse the Internet in groups, discussing as they go. Instead of a physical tour bus we will have an informational tour “bus”. It can have a “driver” who comments along the way: “This site shows how the Internet began …”. Or members could take turns to host the next site, showing what they like. The possibilities of social computing are just beginning.


1.8 The Requirements Hierarchy

Figure 1.7: Computing levels imply a requirements hierarchy

The evolution of computing implies a requirements hierarchy (Figure 1.7). If the hardware works, then software becomes the priority; if the software works, then people’s needs become important; and when people’s needs are fulfilled, then social requirements arise. As one level’s issues are met, those of the next appear, just as climbing one hill reveals another. As hardware over-heating problems are solved, software data locking problems arise. As software response times improve, user response times become the issue. Companies like Google and eBay still seek customer satisfaction, but customers in crowds have community needs like fairness, i.e. higher levels come to drive success.

In general, the highest system level defines its success; e.g. social networks need a community to succeed. If no community forms, it does not matter how easy to use, fast or reliable the software is. Lower levels are necessary to avoid failure but not sufficient to define success.
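This necessary-but-not-sufficient logic can be sketched as an ordered check over the levels. A minimal illustration in Python; the level names follow the text, but the checks themselves are hypothetical:

```python
# Lower levels are necessary to avoid failure, but only the highest
# level (community) defines success. All checks here are illustrative.

LEVELS = ["mechanical", "informational", "personal", "community"]

def evaluate(system):
    """Return the first failing level, or 'success' if every level passes."""
    for level in LEVELS:                     # check bottom-up
        if not system.get(level, False):
            return "fails at the %s level" % level
    return "success"

# A social network with solid hardware, software and usability still
# fails if no community forms:
social_network = {"mechanical": True, "informational": True,
                  "personal": True, "community": False}
print(evaluate(social_network))
```

Note that the bottom-up loop mirrors the hierarchy: a hardware failure is reported before a community one is even considered, yet passing every lower level alone never yields success.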

Level          Requirements                                  Errors
Community      Reduce community overload and clashes;        Unfairness, slavery, selfishness,
               increase productivity, synergy, fairness,     apathy, corruption, lack of privacy.
               freedom, privacy, transparency.
Personal       Reduce cognitive overload and clashes;        User misunderstands, gives up, is
               increase meaning transfer efficiency.         distracted, or enters wrong data.
Informational  Reduce information overload and clashes;      Processing hangs, data storage full,
               increase data processing, storage and         network overload, data conflicts.
               transfer efficiency.
Mechanical     Reduce physical heat or force overload;       Overheating, mechanical fractures or
               increase heat or force efficiency.            breaks, heat leakage, jams.

Table 1.2: Computing errors by system level

Conversely, any level can cause failure; it does not matter how strong the community is if the hardware fails, the software crashes or the interface is unusable. An STS fails if its hardware fails, if its program crashes or if users cannot figure it out. Hardware, software, personal and community failures are all computing errors (Table 1.2). The common feature is that the system fails to perform, and in evolution, what does not perform does not survive.

Each level emerges from the previous but fails differently:

  • Hardware systems based on physical energy exchange fail from problems like overheating.
  • Software systems based on information exchange fail from problems like infinite loops.
  • HCI systems based on meaning exchange fail from problems like misunderstanding or information overload.
  • Socio-technical systems based on normative meme exchange fail from problems like mistrust, unfairness and injustice.

Computing as technology fails for technical reasons, but as socio-technology it also fails for social reasons.

Technology is hard, but society is soft. That the soft should direct the hard seems counter-intuitive, but trees grow at their soft tips more than at their hard base. As a tree trunk does not direct its expanding canopy, so today’s social computing advances were undreamt of by its engineering base. Today’s technology designers will find the future of design in level combinations.


1.7 The Reductionist Dream

The Clockwork Universe is an old 19th century idea that hasn’t worked

Before going on, we review the opposing theory of reductionism, which states that there is really only one level, namely physical reality, so everything can reduce to it. How has this worked in science?

The reductionist dream is based on logical positivism, a nineteenth-century metaphysical position stating that only the physical exists, so all science must be expressed in terms of physical observables. In psychology, it led to Behaviorism (Skinner, 1948), which is now largely discredited (Chomsky, 2006). Science is not a way to prove facts, but a way to use world feedback to make best guesses. Yet when Shannon and Weaver defined information as a choice between physical options, the options were physical but the choosing was not (Shannon & Weaver, 1949). A message physically fixed in one way has, by this definition, zero information, because the other ways it could have been fixed do not exist physically. An on/off voltage choice is one bit, but a physical signal alone is no information; hence hieroglyphics that cannot be read have in themselves no information at all.

If reader choices generate information, the data in a physical signal is unknown until it is deciphered. Data compression fits the same data in a physically smaller signal by encoding it more efficiently. It could not do this if information was fully defined by the physical message. The physical level is necessary for the information level but it is not sufficient. Conversely, information does not exist physically, as it cannot be touched or seen.
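The compression argument can be seen directly with Python’s standard zlib module: the same information survives in a physically smaller signal because a better encoding exploits redundancy, which a purely physical account of the message cannot explain. A minimal sketch:

```python
import zlib

# The same information in a physically smaller signal: a redundant
# message compresses well because its encoding, not its physical size,
# carries the data.
message = b"the quick brown fox " * 50      # 1000 bytes of repetitive text
smaller = zlib.compress(message)

print(len(message), len(smaller))           # the compressed signal is far smaller
assert zlib.decompress(smaller) == message  # yet no information is lost
```

The round trip through `decompress` recovers the message exactly, so the information was never a property of the 1000-byte physical signal alone.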

So if the encoding is unknown, the information is undefined; e.g. an electronic pulse sent down a wire could be a bit, or a byte (an ASCII “1”), or, as the first word of a dictionary (say “aardvark”), many bytes. The information a message conveys depends on the decoding process; e.g. every 10th letter of this text gives an entirely new (and nonsensical) message.
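That the information conveyed depends on the decoding process can be shown with one byte sequence read three ways; a short Python sketch (the choice of interpretations is illustrative):

```python
# One physical signal, three decodings, three different "messages".
signal = bytes([0x41, 0x42])                 # two byte values on the wire

as_text = signal.decode("ascii")             # read as ASCII characters
as_number = int.from_bytes(signal, "big")    # read as one 16-bit integer
as_bits = format(as_number, "016b")          # read as sixteen individual bits

print(as_text)    # AB
print(as_number)  # 16706
print(as_bits)    # 0100000101000010
```

The physical signal never changes; only the reader’s decoding does, and with it the information received.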

One response to reductionism is mathematical realism, that mathematical laws are real even if they are not concrete (Penrose, 2005). Mathematics is a science because its constructs are logically correct, not because they are physical. That an equation is later physically useful is not the cause of its reality. Reality is now a consensual construct, with physicality just one option. Likewise in psychology, Skinner’s attempt to reduce all cognitions to physical behavior did not work and has been replaced by cognitive realism, that cognitions are also real.

The acceptance of mathematical and cognitive constructs does not deny science, because science only requires that theory constructs be validated empirically, i.e. by a physical measure, not that they be physical. For example, fear as a cognitive construct can be measured by heart rate, pupil dilation, blood pressure, a questionnaire, etc. Empirical means derived from the physical world, so mental constructs with no physical referent, like love, are not inherently empirical.

Even physics cannot reduce its theories to pure physicality, as quantum theory implies a primordial non-physical quantum level of reality below the physical (Whitworth, 2011). For example, quantum collapse ignores the speed of light limit and quantum waves travel many paths at once. In physics, reductionism gave a clockwork universe where each state perfectly defined the next, as in a computer. Quantum physics flatly denied this, as random quantum events by definition are explained by no physical history. The quantum world cannot be reduced to physical events. Either quantum theory is wrong or reductionism does not work! If all science were physical, all science could be reduced to physics, which it cannot.

A reductionist philosophy that has failed in science in general is hardly a good base for a computing model. If the physical level were sufficient alone, there would be no choices and so no information, i.e. reductionism denies information science. As the great 18th century German philosopher Kant argued long ago, we see an object, or phenomenon, as a view, but don’t see the thing in itself (Kant, 1781/2002). He called the “thing in itself” the noumenon, as opposed to the phenomenon, or view we see. A bat or a bee would see the world differently from us. It is egocentrism to assume the world is only as we see it. Following Kant’s model, the different disciplines of science are just different ways to view the same unspecified reality. Levels return the observer to science, as quantum theory’s paradoxes demand.

Currently, sociology sees individuals as conduits of meaning that reflect external social structures, so it treats psychological, biological and physical views as the faulty reductionism of social realities. In this social determinism, society writes social agendas, such as communism or capitalism, upon individual tabulae rasae (blank slates). Yet this just replaces the determinism of fields like biology (Wilson, 1975) and psychology (Skinner, 1948) with another form of determinism.

Figure 1.5 Computing has levels

By contrast, in the general system model of computing shown in Figure 1.5, each level emerges from the previous. So if all individual thoughts were erased, society would also cease to exist, as surely as if all its citizens had vanished physically. Sociology assumes psychology, which has led to attempts to re-attach it to its psychological roots, e.g. Bourdieu’s habitus references individual cognitions of the social environment and Giddens’ mental frames underlie social life (Bone, 2005). The top-down return of sociology to its source matches an equally vibrant bottom-up movement in computing, which has long seen itself as more than hardware and software (Boulding, 1956).


1.6 From People to Communities

Even as HCI develops into a traditional academic discipline, computing has already moved on to add sociology to its list of paramours. Socio-technical systems use the social sciences in their design as HCI interfaces use psychology. STS is not part of HCI, nor is sociology part of psychology, because a society is more than the people in it; e.g. East and West Germany, with similar people, performed differently as communities, as do North and South Korea today. A society is not just a set of people. People who gather to view an event or customers shopping for bargains are an aggregate, not a community. They only become a community when they see themselves as one, i.e. the community level arises directly from personal level cognitions.

Social systems can have a physical base or a technical base, so a socio-physical system is people socializing by physical means. Face-to-face friendships cross seamlessly to Facebook because the social level persists across physical and electronic architecture bases. Whether electronically or physically mediated, a social system is always people interacting with people. Electronic communication may be “virtual” but the people involved are real.

Figure 1.6: The generations of computing disciplines

Online communities work through people, who work through software that works through hardware. While sociology studies the social level alone, socio-technical design studies how social, human, information and hardware levels interact. A sociologist can no more design socio-technologies than a psychologist can design human-computer interfaces. STS and HCI need computer-sociologists and computer-psychologists. The complexity of modern computing arises from its discipline promiscuity (Figure 1.6).
