3.7 A Media Framework

Table 3.1 below categorizes communication media, both physical and electronic, by message richness and linkage, e.g. a phone call is interpersonal audio while a letter is interpersonal text. A book is a broadcast document, radio is broadcast audio and TV is broadcast video. The Internet can broadcast documents (web sites), audio (podcasts) or video (YouTube). Email allows one-way interpersonal text messages, while Skype adds two-way audio and video. Chat is few-to-few matrix text communication, as is instant messaging, though among known people. Blogs are text broadcasts that also allow comment feedback. Online voting is matrix communication, as many communicate with many in one operation.

Position
   Broadcast: Footprint, Flare, Scoreboard, Scream
   Interpersonal: Posture, Gesture, Nodding, Salute, Smiley
   Matrix: Show of hands, Applause, An election, Web counter, Karma system, Tag cloud, Online vote, Reputation system, Social bookmarks

Document
   Broadcast: Poster, Book, Web site, Blog, Online photo, News feed, Online review, Instagram, Twitter (text)
   Interpersonal: Letter, Note, Email, Texting, Instant message, Social network
   Matrix: Chat, Twitter, Wiki, E-market, Bulletin board, Comment system, Advice board, Social network

Audio
   Broadcast: Radio, Loud-speaker, Record, CD, Podcast, Online music
   Interpersonal: Telephone, Answer-phone, Cell phone, Skype
   Matrix: Choir, Radio talk-back, Conference call, Skype conference call

Multi-media (Video)
   Broadcast: Speech, Show, Television, Movie, DVD, YouTube
   Interpersonal: Face-to-face conversation, Chatroulette, Video-phone, Skype video
   Matrix: Face-to-face meeting, Cocktail party, Video-conference, MMORPG, Simulated world

Table 3.1: Communication media by richness (rows) and linkage (Broadcast, Interpersonal, Matrix)

Note that Twitter is both broadcast (text) and matrix (when people “follow” others), and that social networks are likewise both interpersonal and matrix communication.
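The framework can be expressed as a simple data model. Below is a minimal sketch in Python; the names and the small media sample are illustrative, not the full table. Richness and linkage are enumerations, and a medium may occupy more than one linkage cell, as Twitter and social networks do.

```python
from dataclasses import dataclass
from enum import Enum

class Richness(Enum):            # message complexity, simple to complex
    POSITION = 1
    DOCUMENT = 2
    AUDIO = 3
    MULTIMEDIA = 4

class Linkage(Enum):             # sender-receiver structure
    INTERPERSONAL = "one-to-one"
    BROADCAST = "one-to-many"
    MATRIX = "many-to-many"

@dataclass(frozen=True)
class Medium:
    name: str
    richness: Richness
    linkages: frozenset          # a medium can support several linkages

MEDIA = [    # a small illustrative sample, not the full Table 3.1
    Medium("Email", Richness.DOCUMENT, frozenset({Linkage.INTERPERSONAL})),
    Medium("Twitter", Richness.DOCUMENT, frozenset({Linkage.BROADCAST, Linkage.MATRIX})),
    Medium("Skype", Richness.MULTIMEDIA, frozenset({Linkage.INTERPERSONAL})),
    Medium("Online vote", Richness.POSITION, frozenset({Linkage.MATRIX})),
]

def cell(richness: Richness, linkage: Linkage) -> list[str]:
    """List all media that occupy a given cell of Table 3.1."""
    return [m.name for m in MEDIA if m.richness is richness and linkage in m.linkages]

print(cell(Richness.DOCUMENT, Linkage.MATRIX))   # ['Twitter']
```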

Computers are said to allow anytime, anywhere communication, but while asynchronous email communication lets senders ignore distance and time, synchronous communication like Skype does not, as one cannot call someone who is sleeping. The world is flat, as Friedman says, but the day is still round. The main contribution of technology is allowing communication for less effort, e.g. an email is easier to send than a posted letter. Lowering the message threshold means that more messages are sent (Reid et al., 1996). Email stores a message until the receiver can view it, but a face-to-face message is ephemeral; it disappears if you are not there to get it. Yet being unable to edit a sent message makes sender state streams like tone of voice more genuine.

Media richness theory framed communication in media richness terms, so electronic communication was expected to move directly to video, but that is not what happened. eBay’s reputations, Amazon’s book ratings, Slashdot’s karma, tag clouds, social bookmarks and Twitter are not rich at all. Table 3.1 shows that computer communication evolved by linkage as well as richness: computer chat, blogs, messaging, tags, karma, reputations and wikis are all high linkage but low richness.

Communication that combines high richness and high linkage is interface expensive, e.g. in a face-to-face meeting, rich channels and matrix communication give factual, sender state and group state information. The medium also lets real-time contentions between people talking at once be resolved naturally. Everyone can see who the others choose to look at, so whoever gets the group focus continues to speak. To do the same online would require not only many video streams on every screen but also a mechanism to manage media conflicts. Who controls the interface? If each person controls his or her own view there is no commonality, while if one person controls the view, like a film editor, there is no group action. Can electronic groups act democratically, as face-to-face groups do?

In audio-based tagging, the video of the person speaking automatically becomes central (Figure 3.6). The interface is common but it is group-directed, i.e. democratic. Gaze-based tagging is the same except that when people look at a person, that person’s window expands on everyone’s screen. It is in effect a group-directed bifocal display (Spence and Apperley, 2012). When matrix communication is combined with media richness, online meetings will start to match face-to-face meetings in communication performance.

Figure 3.6: Audio-based video tagging
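A minimal sketch of how audio-based tagging might work, assuming a hypothetical conferencing layer that reports audio levels and exposes an enlarge_tile call per client; all names here are illustrative, not a real conferencing API:

```python
# Minimal sketch of audio-based tagging: the speaking participant's video tile
# is enlarged on every client, so the group, not any one person, directs the
# shared interface. All names are illustrative assumptions.

FOCUS_THRESHOLD = 0.6   # speech energy level that counts as "speaking"

class Meeting:
    def __init__(self, clients):
        self.clients = clients          # one client object per participant
        self.focus = None               # participant currently enlarged

    def on_audio_level(self, participant, level):
        """Called by the media layer whenever a participant's audio level changes."""
        if level > FOCUS_THRESHOLD and participant is not self.focus:
            self.focus = participant
            for client in self.clients:            # the same view for everyone:
                client.enlarge_tile(participant)   # a group-directed display
```

Gaze-based tagging would be the same loop driven by an eye-tracking event instead of an audio level.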

A social model of communication suggests why video-phones are technically viable today but video-phoning is still not the norm. Consider its disadvantages, like having to shave or put on lipstick to call a friend or having to clean up the background before calling Mum. Some even prefer text to video because it is less rich, as they do not want a big conversation.

In sum, computer communication is about linkage as well as richness because communication acts involve senders and receivers as well as messages. Communication also varies by system level, as is now discussed.


3.6 Communication Performance


Communication is the transmission of meaning between senders and receivers, where meaning is any change in a person’s thoughts, feelings or motives. The communication performance of a system is the total amount of meaning exchanged, i.e. its human impact. Whether online or off, it depends on two factors:

1)  Message richness. The amount of meaning a message conveys.

2)  Sender-receiver linkage. The number of people and directions in a communication link.

Message richness increases communication performance by increasing the amount of meaning transferred per message. It is not to be confused with media richness, the bandwidth of the communication medium. To suppose video is always richer than text is to confuse the meaning level and the information level. Meaning is the human impact, so texting “I’m safe” can have more meaning than a multi-media sales video if one is overwhelmed to hear that a loved one is safe but indifferent to a sales pitch. Yet media bandwidth does affect the message type as follows (from simple to complex):

a.   Position. A single, static symbol, e.g. to raise one’s hand. An online vote is a position.

b.   Document. Many static symbols that form meaning patterns as words form a sentence by syntax or pixels form an object by Gestalt principles. Documents are text or pictures.

c.    Audio. A dynamic channel with multiple semantic streams, as speech has tone of voice and content. A semantic stream is meaning produced by a type of processing, so one physical channel processed differently can have many semantic streams, e.g. tone of voice and message content.

d.  Multi-media. Many dynamic channels, e.g. video combines audio and visual channels, and face-to-face communication uses many sensory channels.

More complex messages that can transfer more meaning require more channel bandwidth.

Linkage increases communication performance by increasing the senders and receivers in a message link (Figure 3.4).

Figure 3.4: Communication linkage (S = Sender, R = Receiver)

 The most common types of communication linkage are:

a.   Interpersonal (one-to-one, two-way): Both parties can send and receive, usually signed.

b.   Broadcast (one-to-many, one-way): From one sender to many receivers, can be unsigned.

c.   Matrix (many-to-many, two-way): Many senders to many receivers, usually unsigned.

Communication performance depends on linkage, as an interpersonal message sent to one person by, say, email impacts just one person, but when two people chat the communication goes both ways. A message posted on Facebook is broadcast to many people, which again increases the human impact. People communicate with people at the personal level, but an even more powerful communication is available at the community level. Matrix communication is group-to-group, i.e. many send and many receive in one transmit operation. It combines one-to-many (broadcast) plus many-to-one (merge) communication, e.g. an audience applauding a speaker is many-to-many, as the group producing the clapping message also receives it. Matrix communication allows normative influence, e.g. audiences start and stop clapping together. A choir singing is also matrix communication, so when choirs go off key they usually do so together.

Figure 3.5: Tagging is matrix communication

Face-to-face groups also use matrix communication, as body language and facial expressions convey each group member’s position on an issue to everyone else in the group. A valence index calculated from member position indicators was found to predict a group discussion outcome as well as the words spoken did (Hoffman & Maier, 1961). Matrix communication is how online electronic groups form social agreement without any rich information exchange or discussion, using only anonymous, lean signals (Whitworth et al., 2001). Community voting, as in an election, is a physically slow matrix communication that computers can speed up. Tag cloud, reputation system and social bookmark technologies all illustrate online support for matrix communication (Figure 3.5).
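The merge half of matrix communication can be seen as an aggregation step: each member sends a lean position signal, the signals are merged, and the result is broadcast back to all. A toy sketch follows, with a valence index in the spirit of Hoffman and Maier's measure; the ±1 coding and the adopt/drop threshold are illustrative assumptions, not their published procedure:

```python
# Toy sketch of matrix communication as merge + broadcast: each member sends
# a lean position signal (+1 approve, 0 neutral, -1 reject), the signals are
# merged into a valence index, and the tally is broadcast back to everyone.
# The coding and threshold are illustrative, not Hoffman & Maier's procedure.

def merge(positions: dict[str, int]) -> int:
    """Many-to-one: combine all member position signals into one valence index."""
    return sum(positions.values())

def matrix_step(members: dict[str, int]) -> str:
    valence = merge(members)                      # many-to-one merge
    verdict = "adopt" if valence > 0 else "drop"  # group outcome from lean signals
    return f"valence={valence}: {verdict}"        # one-to-many broadcast back

print(matrix_step({"Ann": 1, "Bob": 1, "Col": -1, "Di": 0}))  # valence=1: adopt
```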

If communication performance is richness plus linkage, a regime bombarding citizens 24/7 with one-way broadcast TV/video propaganda can exchange less meaning than people talking freely on Twitter, which is many-to-many linkage.


3.5 The Web of Social Performance

Figure 3.3: The social web of performance

Communities interact with others, using spies to act as “eyes”, diplomats to communicate, engineers to effect, soldiers to defend, intellectuals to adapt and traders to extend, but a community can also interact with itself, to communicate or synergize, as follows (Figure 3.3):

1)   Productivity. As functionality is what a system can produce, so communities produce bridges, art and science by citizen competence, based on education in knowledge, ethics and tacit skills. Help and FAQ systems illustrate this for an online community.

2)   Synergy. As usability is less effort per result, so communities improve efficiency by synergy based on trust. In a society, if everyone gives, everyone gets, but if everyone takes, everyone is taken from. Public goods like roads and hospitals involve functional specialists giving to all. If all citizen specialists offer their services to others, all get more for much less effort. Wikipedia illustrates online synergy, as many specialists give a little knowledge so that all get a lot of knowledge.

3)   Freedom. As flexibility is a system’s ability to change to fit a changing environment, so communities become flexible when citizens have freedom, i.e. the right to act autonomously. This is the right not to be a slave. The problem with competition is that if you give peanuts you get monkeys but if you give honey you get wasps. Freedom allows local resource control, which increases social performance just as decentralized protocols like Ethernet improve network performance.

4)   Order. Reliability is a system’s ability to survive internal part failure or error. A community achieves reliability through order, when citizens, by rank, role or job, know and do their duty. Some cultures set up warrior or merchant castes to achieve this. Online order is also by roles, e.g. Sysop or Editor.

5)   Ownership. Security is a system’s defense against outside takeover. A community is made secure internally by ownership, as to “own” a house guarantees that if another takes it, the community will step in. State ownership, as in communism or tribalism, is still ownership. Only if a state gives its ownership away do citizens get freedom. Online, ownership works by access control (see Chapter 6, and the sketch after this list).

6)   Openness. Extendibility is a system’s ability to use what is outside itself. A community doing this is illustrated by America’s invitation to the world:

“Give me your tired, your poor, your huddled masses yearning to breathe free”.

A society is open internally if any citizen can achieve any role by merit, just as Abraham Lincoln, born in a log cabin, became US president. The opposite is nepotism or cronyism, giving jobs to family or friends regardless of merit. If community advancement is by who you know, not what you know, performance drops. Open source systems like SourceForge let people advance by merit.

7)   Transparency. As connectivity lets a person communicate with others, so transparency lets a community communicate with itself by media like TV, newspapers, radio and now the Internet. In a transparent community, people can easily see what the group is doing but in an opaque one they cannot. Transparent governance lets citizens view public money spent and privileges given, as public acts on behalf of a community are not private. Systems like Wikipedia illustrate transparency online.

8)   Privacy. Privacy as a citizen’s right to control their personal information can also describe a community right given to all citizens, i.e. the same term works at the community level. In this case, it includes communication privacy, the right not to be monitored without consent.
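Items 4 and 5 above translate directly into code as access control. The sketch below is a minimal role-based illustration; the roles and permissions are assumptions for illustration, and Chapter 6 treats access control fully:

```python
# Minimal role-based access control sketch for online order and ownership.
# Roles and permissions are illustrative; see Chapter 6 for the full treatment.

ROLE_PERMISSIONS = {
    "Sysop":  {"read", "edit", "delete", "ban"},   # roles keep order
    "Editor": {"read", "edit"},
    "Member": {"read"},
}

def allowed(role: str, action: str, actor: str, owner: str) -> bool:
    """Owners control their own items; otherwise the role decides."""
    if actor == owner:                   # ownership: my item, my control
        return True
    return action in ROLE_PERMISSIONS.get(role, set())

print(allowed("Editor", "delete", actor="eve", owner="ann"))  # False
print(allowed("Member", "read",   actor="eve", owner="ann"))  # True
```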

Note some of the design tensions involved here:

  • Productivity vs. synergy. Both require citizen participation, based on skills and trust respectively, yet are also in tension as one invokes competition and the other cooperation. The former improves how citizen “parts” perform and the latter how they interact. Service by citizens reconciles productivity and synergy, as it invokes both.
  • Freedom vs. order. Both are in tension, as freedom has no class but order does. The social invention of democracy merges freedom and order by letting citizens choose their governance, not just the President or Prime Minister but all positions. Democracy is rare online, but Slashdot uses it.
  • Ownership vs. openness. Both are in tension, as the right to keep out denies the right to go in. Fairness can reconcile public access and private control. Offline fairness is supported by justice systems but online fairness must be supported by code.
  • Transparency vs. privacy. Both are in natural tension, as giving the right to see another may deny their right not to be seen. Politeness reconciles this tension by letting people of different rank, education, age and experience connect. Further details are given in Chapter 4.

The social invention of rights can also reconcile many aspects of the web of social performance, as discussed in Chapter 6. In the web of social performance, a community increases citizen competence to be productive, increases trust to get synergy, gives freedoms to adapt and innovate, establishes order to define responsibilities, allocates ownership to prevent property conflicts, is open to outside and inside talent, communicates internally to generate agreement, and grants legitimate rights that stabilize social interaction. Sexism and racism are community level losses: if women cannot work, half the population cannot add to its productivity, and if a race, like black people, is excluded, so are their contributions. All these requirements together increase social performance, but they are in design tension, so each community must define its own social structure, whether online or off.


3.4 Legitimacy Analysis

A social system is here an agreed form of social interaction that persists (Whitworth and de Moor, 2003), and people seeing themselves as a community is essentially their choice. Legitimacy can be defined as a combination of fairness and public good applied to a social system (Whitworth and de Moor, 2003). In politics, a legitimate government is seen as rightful by its citizens and is thus accepted, while illegitimate governments need to stay in power by force of arms and propaganda. By extension, legitimate interaction is accepted by the parties involved, who freely repeat it, e.g. fair trade. Physical and online citizens prefer legitimate communities because they perform better.

In physical society, legitimacy is maintained by laws, police and prisons that punish criminals. Legitimacy is the human level concept by which judges create new laws and juries decide never-before-seen cases. A higher-affects-lower principle applies: communities engender human ideas like fairness, which generate informational laws that are used to govern physical interactions. Communities of people create rules to direct acts that benefit the community, like outlawing theft, i.e. higher level goals drive lower level operations to improve system performance. Doing the same thing online, applying social principles to technical systems, is the basis of socio-technical design.

Conversely, over time lower levels get a “life of their own” and the tail starts to wag the dog, e.g. copyright laws originally designed to encourage innovators have become a tool to perpetuate the profit of the corporations that purchased those creations (Lessig, 1999); Disney copyrighted public domain stories like Snow White, which it did not create, solely to stop others using them. Unless continuously “re-invented” at the human level, information level laws inevitably decay and cease to work.

Lower levels like software and hardware are more obvious, so it is easy to forget that today’s online society is a social evolution as well as a technical one. The Internet is new technology but it is also a move to new social goals like service and freedom rather than control and profit. So for the Internet to become a hawker market, of web sites yelling to sell, would be a devolution. The old ways of business, politics and academia should follow the new Internet way, not the reverse.

There are no shortcuts in this social evolution, as one cannot just “stretch” physical laws into cyberspace (Samuelson, 2003), because these laws often:

1)   Do not transfer (Burk, 2001), e.g. what exactly is online “trespass”?

2)   Do not apply, e.g. what law applies to online “cookies” (Samuelson, 2003)?

3)   Change too slowly, e.g. laws change over years but code changes in months.

4)   Depend on code (Mitchell, 1995), e.g. online anonymity means actors cannot be identified.

5)   Have no jurisdiction. U.S. law applies to U.S. soil, but cyber-space is not “in” America.

Figure 3.2: Legitimacy analysis

The software that mediates online interaction has by definition full control of what happens, e.g. any application could upload any file on your computer hard drive to any server. In itself, code could create a perfect online police state, where everything is monitored, all “wrong” acts punished and all undesirables excluded, i.e. a perfect tyranny of code. Socio-technical design is the only way to ensure this does not happen.

Yet code is also an opportunity to be better than the law, based on legitimacy analysis (Figure 3.2). Physical justice, by its nature, operates after the fact, so one must commit a crime to be punished. Currently, with long court cases and appeals, justice can take years, and justice delayed is justice denied. In contrast, code represents the online social environment directly, as it acts right away. It can also be designed to enable social acts as well as to deny anti-social ones. If online code is law (Lessig, 1999), to get legitimacy online we must build it into the system design, knowing that legitimate online systems perform better (Whitworth and de Moor, 2003). That technology can positively support social requirements like fairness is the radical core of socio-technical design.

So is every STS designer an application law-giver? Are we like Moses coming down from the mountain with tablets of code instead of stone? Not quite, as STS directives are to software not to people. Telling people to act rightly is the job of ethics not software, however “smart”. The job of right code, like right laws, is to attach outcomes to social acts, not to take over people’s life choices. Code as a social environment cannot be a social actor. Socio-technical design is socializing technology to offer fair choices, not technologizing society to be a machine with no choice at all. It is the higher directing the lower, not the reverse.

To achieve online what laws do offline, STS developers must re-invoke legitimacy for each application. It seems hard, but every citizen on jury service already does this when they interpret the “spirit of the law” for specific cases. STS design is the same but for application cases. That the result is not perfect doesn’t matter. Cultures differ but all have some laws and ethics, because some higher level influence is always better than none.

To try to build a community as an engineer builds a house is the error of choosing the wrong level for the job. Social engineering by physical coercion, oppressive laws, or propaganda and indoctrination is people using others as objects, i.e. anti-social. A community is by definition many people seeing themselves as one, so an elite few enslaving the rest is not a community. Social engineering treats people like bricks in a wall, which denies social requirements like freedom and accountability. Communities can emerge as people interact, but they cannot be “built”, because to treat citizen actors as objects denies their humanity.


3.3 The Socio-technical Gap

Figure 3.1: The socio-technical gap

Simple technical design gives a socio-technical gap (Figure 3.1) between what the technology allows and what people want (Ackerman, 2000). For example, email technology ignored the social requirement of privacy, letting anyone email anyone without permission, and so gave spam. The technical level response to this social level problem was inbox filters. They help on a local level, holding user inbox spam roughly constant, but spam transmitted by the Internet commons, the system problem, has never stopped growing. It grew from 20% to 40% of messages in 2002-2003 (Weiss, 2003), to 60-70% in 2004 (Boutin, 2004), to 86.2-86.7% of the 342 billion emails sent in 2006 (MAAWG, 2006; MessageLabs, 2006), to 87.7% in 2009 and 89.1% of all emails sent in 2010 (MessageLabs, 2010). A 2004 prediction that within a decade over 95% of all emails transmitted by the Internet would be spam is coming true (Whitworth & Whitworth, 2004). Due to spam, email users are moving to other media, but if we make the same socio-technical design error there, the problem will just follow, e.g. SPIM is instant messaging spam.

Filters see spam as a user problem, but it is really a community problem: a social dilemma. Transmitted spam uses Internet storage, processing and bandwidth, whether users hiding behind their filter walls see it or not. Only socio-technology can resolve social problems like spam, because in the “spam wars” technology helps both sides, e.g. image spam can bypass text filters, AI can solve CAPTCHAs, botnets can harvest web site emails, and zombie sources can send them. Spam is not going away any time soon (Whitworth and Liu, 2009a).
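Filters themselves show why the fix is only local. A minimal Bayesian filter of the general kind inboxes use is sketched below; the prior and word probabilities are invented for illustration. It hides spam from one user's view, but the message was still transmitted, stored and processed by the commons:

```python
import math

# Minimal naive Bayes spam filter sketch. The prior and the per-word
# probabilities below are invented for illustration. A filter like this hides
# spam from one inbox; the message still crossed the Internet commons.

P_SPAM = 0.9                      # prior: most transmitted email is spam
WORD_GIVEN_SPAM = {"free": 0.30, "winner": 0.20, "meeting": 0.01}
WORD_GIVEN_HAM  = {"free": 0.05, "winner": 0.01, "meeting": 0.20}

def is_spam(message: str, threshold: float = 0.5) -> bool:
    log_spam = math.log(P_SPAM)
    log_ham = math.log(1 - P_SPAM)
    for word in message.lower().split():
        if word in WORD_GIVEN_SPAM:           # ignore words we have no data for
            log_spam += math.log(WORD_GIVEN_SPAM[word])
            log_ham += math.log(WORD_GIVEN_HAM[word])
    p = 1 / (1 + math.exp(log_ham - log_spam))   # posterior P(spam | words)
    return p > threshold

print(is_spam("free winner"))      # True  - filtered locally, sent anyway
print(is_spam("project meeting"))  # False
```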

Right now, aliens observing our planet might think email is built for machines, as most messages transmitted go from one (spam) computer to another (filter) computer, untouched by human eye. This is not just bad luck. Because we built it, a communication technology is not a Pandora’s box whose contents are unknown until opened. Spam happens when we build technologies instead of socio-technologies.


3.2 Social Requirements

Social ideas like freedom seem far removed from computer code but when computing is social, they are the difference between success and failure. That technology designers are not ready, have no precedent or do not understand social needs is irrelevant. Like a baby being born, online society is pushing forward, whether we are ready or not. And like new parents, socio-technical designers are causing it, again ready or not. As the World Wide Web’s creator observes:

“... technologists cannot simply leave the social and ethical questions to other people, because the technology directly affects these matters” (Berners-Lee, 2000: p124)

One cannot design socio-technology in a social vacuum. Fortunately, while virtual society is new, people have been socializing for thousands of years. We know that fair communities prosper but corrupt ones do not (Eigen, 2003). Social inventions like laws, fairness, freedom, credit and contracts were bought with blood and tears (Mandelbaum, 2002), so why start anew online? Why reinvent the social wheel in cyber-space (Ridley, 2010)? Why re-learn electronically what we already know physically?

As the new bottle of information technology fills with the old wine of society, the stakes are raised. The information revolution increases our power to gather, store and distribute information, for good or ill (Johnson, 2001). Are we the hunter-gatherers of the information age (Meyrowitz, 1985) or an online civilization? A stone-age society with space-age technology is a bad mix, but what are the requirements for technology to support civilization? Computing cannot implement what it cannot specify.

We live in social environments every day, but struggle to specify them. Just as a bird does not see the air or a fish the water, we are social environment blind, e.g. when a shopkeeper swipes a credit card with a reading device, it is taken for granted that the device was designed not to store the card number or PIN. It is designed to the social requirement that shopkeepers do not steal customer data, even though the machine is quite capable of doing so. Without this social choice credit would collapse, and social disasters like the Depression can be worse than natural disasters. Credit card readers support legitimate social interaction by design.

Trying to gather all the information you can is information greediness. If online computer systems take and sell customer data such as home addresses and phone numbers for advantage, users will lose trust and either refuse to register at all, or register with fake data, like “123 MyStreet, MyTown, NJ” (Foreman & Whitworth, 2005). To say data will never be revealed is not good enough, as companies can be forced by governments or bribed by cash to reveal it, but one cannot be forced or bribed to give data one does not have. The best way to guarantee online privacy and trust is not to store unneeded information in the first place.
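Data minimization can be built into the registration code itself. A sketch under assumed names follows; the field list and the storage call are hypothetical, not a particular framework's API:

```python
# Data-minimization sketch: store only what the service needs. Field names and
# the storage call are illustrative assumptions, not a particular framework.

NEEDED_FIELDS = {"username", "email"}   # all the service actually requires

def register(form: dict[str, str]) -> dict[str, str]:
    """Keep only needed fields; unneeded data is dropped before it is stored,
    so it can never be revealed, bribed out, or subpoenaed later."""
    record = {k: v for k, v in form.items() if k in NEEDED_FIELDS}
    save_user(record)                   # stand-in for a real database write
    return record

def save_user(record: dict[str, str]) -> None:
    pass                                # hypothetical persistence call

print(register({"username": "ann", "email": "ann@example.org",
                "home_address": "123 MyStreet"}))   # address never stored
```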


3.1 Designing Work Management

The term socio-technical was first introduced by the UK Tavistock Institute in the late 1950s to oppose Taylorism, the reduction of jobs to efficient elements on assembly lines in mills and factories (Porra & Hirschheim, 2007). Community level performance needs, applied to the personal level, gave workplace management ideas including:

1)   Congruence. Let the process match its objective — democratic results need democratic means.

2)   Minimize control. Give employees clear goals but let them decide how to achieve them.

3)   Local control. Let those with the problem change the system, not absent managers.

4)   Flexibility. Without “extra” skills to handle change, specialization will precede extinction.

5)   Boundary innovation. Innovate at the boundaries, where work goes between groups.

6)   Transparency. Give information first to those it affects, e.g. give work rates to workers.

7)   Evolution. Work system development is an iterative process that never stops.

8)   Lead by example. As the Chinese say: “If a General takes an egg, his soldiers will loot a village.” Note that while Steve Jobs worked for $1 per year, many other CEOs take as much as they can get because being in charge means they can.

9)   Support human needs. Work that lets people learn, choose, feel and belong gives loyal staff.

In computing, the socio-technical approach became a call for the ethical use of technology. This book extends that foundation to apply social requirements to technology design as well as work design, because technologies now mediate social interactions.

During the industrial revolution, when the poet William Blake wrote of “dark satanic mills”, technology was seen by many as the heartless enemy of the human spirit. Yet it was people who ran the factories that enslaved people. The industrial revolution was the rich using the poor as they had always done, but machines let them do it better: technology magnified the effect but was not in itself good or evil. Today we have largely rejected slavery but embrace technology like the car or cell phone, and technology is now on the other side of the class war, as Twitter, Facebook and YouTube support the Arab Spring. Yet the core socio-technical principle is still:

Just because you can, doesn’t mean you should

Socio-technical design puts social needs above technical wants. The argument is that human evolution involves social and technical progress in that order, e.g. today’s vehicles could not work on the road without today’s citizenry. Technology structures like cars also need social structures like road rules.

Social inventions like credit were as important in our evolution as technical inventions like cars. Global trade needs not only ships and aircraft to move goods around the world, but also people willing to do that. Today, online traders send millions of dollars to people they have not seen for goods they have not touched to arrive at times unspecified. A trader in the middle ages, or indeed in the early twentieth century, would have seen that as pure folly. What has changed is not just the technology but also the society. Today’s global markets work by social and technical support:

“To participate in a market economy, to be willing to ship goods to distant destinations and to invest in projects that will come to fruition or pay dividends only in the future, requires confidence, the confidence that ownership is secure and payment dependable. … knowing that if the other reneges, the state will step in …” (Mandelbaum, 2002: p272)


Chapter 2. References

Alberts, B., Bray, D., Lewis, J., Raff, M., Roberts, K., & Watson, J. D. (1994). Molecular Biology of the Cell, 3rd Edition. New York: Garland Publishing Inc.

Alexander, C. (1964). Notes on the Synthesis of Form. Cambridge, Ma: Harvard University Press.

Alter, S. (1999). “A general, yet useful theory of information systems,” Communications of the AIS, 1, March, pp. 13–60.

Berners-Lee, T. (2000). Weaving The Web: The original design and ultimate destiny of the World Wide Web. New York: Harper-Collins.

Borenstein, N. S. & Thyberg, C.A. (1991). “Power, ease of use and cooperative work in a practical multimedia message system,” International Journal of Man-Machine Studies. 34, (2), pp. 229–259.

Campbell-Kelly, M. (2008). “Will the Future of Software be Open Source?,” Communications of the ACM. 51, (10). pp. 21–23.

Chung, L., Nixon, B. A., Yu, E., & Mylopoulos, J. (1999). Non-functional requirements in Software Engineering. Boston: Kluwer Academic.

Cysneiros, L. M., & Leite, J. (2002). Non-functional requirements: From elicitation to modeling languages. Paper presented at the International Conference on Software Engineering 2002, Orlando, Florida.

David, J. S., McCarthy, W.E., & Sommer, B. S. (2003). “Agility – The key to survival of the fittest,” Communications of the ACM, 46, (5) pp. 65–69.

De Simone, M., & Kazman, R. (1995). Software Architectural Analysis: An Experience Report. In Proceedings of CASCON’95 (pp. 251–261). Toronto, ON.

Esfeld, M. (1998). “Holism and Analytic Philosophy,” Mind, 107, pp. 365–380.

Gargaro, A., Rada, R., Moore, J., Carson, G.S., DeBlasi, J., Emery, D., Haynes, C,. Klensin, J., Montanez, I., & Spafford, E. (1993). “The Power of Standards.” Communications of the ACM, 36, (8), pp. 11–12.

Gediga, G., Hamborg, K. & Düntsch, I. (1999). “The IsoMetrics usability inventory: an operationalization of ISO 9241-10 supporting summative and formative evaluation of software systems,” Behaviour & Information Technology, 18, (3), pp. 151–164.

Golden, L., & Figart, D. M. (2000). Working Time: International Trends, Theory and Policy Perspectives. London and New York: Routledge.

Jonsson, E. (1998). An integrated framework for security and dependability. In Proceedings of New Security Paradigms Workshop (NSPW) (pp. 22-29). Charlottesville, VA, USA.

Keeney, R. L., &. Raiffa, H. (1976). Decisions with multiple objectives: Preferences and Value Tradeoffs. New York: Wiley.

Kienzle, D. M., & Wulf, W. A. (1997). A practical approach to security assessment. In Proceedings of New Security Paradigms Workshop (NSPW) (pp. 5–16). Langdale, Cumbria, U.K.

Knoll, K., & Jarvenpaa, S. L. (1994). Information technology alignment or ‘fit’ in highly turbulent environments: The concept of flexibility. In Proceedings of the 1994 ACM SIGCPR Conference, Alexandria, Virginia, USA.

Laprie, J. C., & Costes, A. (1982). Dependability: A unifying concept for reliable computing. In Proceedings of 12th International Symposium on Fault-Tolerant Computing, FTCS-12. (pp. 18–21).

Lindquist, C. (2005). “Fixing the requirements mess,” CIO, Nov 15, 2005. pp. 53–66.

Lorenz, E. N. (1963). “Deterministic nonperiodic flow,” Journal of the Atmospheric Sciences. 20, pp. 130–141.

Losavio, F., Chirinos, L., Mateo, A., Levy, N., & Ramdane-Cherif, A. (2004). “Designing quality architecture: incorporating ISO standards into the unified process,” Information Systems Management, 21, (1), pp. 27–44.

Moreira, A., Araujo, J., & Brita, I. (2002). Crosscutting quality attributes for requirements engineering. In Proceedings of the 14th International Conference on Software Engineering and Knowledge Engineering (SEKE). Ischia, Italy.

Nuseibeh, B., & Easterbrook, S. (2000). “Requirements Engineering: A roadmap.” In Finkelstein, A. (ed.), The Future of Software Engineering. Limerick, Ireland: ACM Press.

Organisation for Economic Cooperation and Development. (1996). Guidelines for the Security of Information Systems. OECD.

Pinto, J. K. (2002). “Project Management 2002.” Research Technology Management. 45, (2), pp. 22–37.

Regan, P. (1995). Legislating privacy, technology, social values and public policy. Chapel Hill, NC: University of North Carolina Press.

Rosa, N. S., Justo, G. R. R. & Cunha P. R. F. (2001). A Framework for Building Non-Functional Software Architectures. Paper presented at the 2001 Symposium on Applied Computing (SAC 2001), Las Vegas, NV.

Sanai, H. (1968). The Enclosed Garden of Truth. New York: Samuel Weiser.

Smith, H. A., Kulatilaka, N. & Venkatramen, N. (2002). “Developments in IS practice III: Riding the wave: extracting value from mobile technology,” Communications of Association of Information Systems, 8, pp. 467–481.

Sommerville, I. (2004) Software Engineering, 7th edition. Boston, MA: Addison-Wesley.

Tenner, E. (1997). Why Things Bite Back. New York: Vintage Books, Random House.

Toffler, A. (1980). The Third Wave. New York: Bantam Books.

Whitworth, B., Banuls, V., Sylla, C., & Mahinda, E. (2008). “Expanding the Criteria for Evaluating Socio-technical Software,” IEEE Transactions on Systems, Man and Cybernetics, Part A, 38, (4), pp. 777–790.

Chapter 4. Discussion Questions

Research questions from the list below and give your answers, with reasons and examples. If you are reading this chapter as part of a class – either at a university or in a commercial course – work in pairs then report back to the class.

1)   Describe three examples where software interacts as if it were a social agent. Cover cases where it asks questions, makes suggestions, seeks attention, reports problems, and offers choices.

2)   What is selfishness in ordinary human terms? What is selfishness in computer software terms? Give five examples of selfish software, ordered from most annoying to least. Explain why each is annoying.

3)   What is a social computing error? How does it differ from an HCI error, or a software error? Take one online situation and give examples of all three types of error. Compare the effects of each type of error.

4)   What is politeness in human terms? Why does it occur? What is polite computing? Why should it occur? List the ways it can help computing.

5)   What is the difference between politeness and legitimacy in a society? Illustrate by examples, first from physical society and then give an equivalent online version.

6)   Compare criminal, legitimate and polite social interactions with respect to the degree of choice given to the other party. Give offline and online examples for each case.

7)   Should any polite computing issues be left until all security issues are solved? Explain, with physical and online examples.

8)   What is a social agent? Give three common examples of people acting as social agents in physical society. Find similar cases online. Explain how the same expectations apply.

9)   Is politeness niceness? Do polite people always agree with others? From online discussion boards, quote people disagreeing politely and agreeing impolitely with another person.

10)    Explain the difference between politeness and etiquette. As different cultures are polite in different ways, e.g. shaking hands vs. bowing, how can politeness be a general design requirement? What does it mean to say that politeness must be “reinvented” for each application case?

11)    Define politeness in general information terms. By this definition, is it always polite to let the other party talk first in a conversation? Is it always polite to let them finish their sentence? If not, give examples. When, exactly, is it a bad idea for software to give users choices?

12)    For each of the five aspects of polite computing, give examples from your own experience of impolite computing. What was your reaction in each case?

13)    Find examples of impolite software installations. Analyze the choices the user has and recommend improvements.

14)    List the background software processes running on your computer. Identify the ones where you know what they do and what application runs them. Do the same for your startup applications and system files. Ask three friends to do the same. How transparent is this system? Why might you want to disable a process, turn off a startup, or delete a system file? Should you be allowed to?

15)    Discuss the role of choice dependencies in system installations. Illustrate the problems of (1) being forced to install what is not needed and (2) being allowed to choose not to install what is.

16)    Find an example of a software update that caused a fault; e.g. update Windows only to find that your Skype microphone does not work. Whose fault is this? How can software avoid the user upset this causes? Hint: Consider things like an update undo, modularizing update changes and separating essential from optional updates.

17)    Give five online examples of software amnesia and five examples of software that remembers what you did last. Why is the latter better?

18)    Find examples of registration pryware that asks for data, such as home address, that it does not really need. If the pry fields are not optional, what happens if you add bogus data? What is the effect on your willingness to register? Why might you register online after installing software?

19)    Give three examples of nagware – software on a timer that keeps interrupting to ask the same question. In each case, explain how you can turn it off. Give an example of when such nagging might be justified.

20)    Why is it in general not a good idea for applications to take charge? Illustrate with three famous examples where software took charge and got it wrong.

21)    Find three examples of “too clever” software that routinely causes problems for users. Recommend ways to design software to avoid this. Hint: consider asking first, a contextual turn off option and software that can be trained.

22)   What data drove Mr. Clippy’s Bayesian logic decisions? What data was left out, so that users found him rude? Why did Mr. Clippy not recognize rejection? Which users liked Mr. Clippy? Turn on the auto-correct in Word and try writing the equation: i = 1. Why does Word change it? How can you stop this, without turning off auto-correct? Find other examples of smart software taking charge.