4.6 Politeness and Security

In current computing design, social requirements like politeness are considered "frills" that must take a back seat to critical requirements like security. Presumably when security is "solved", then and only then, will designers get around to "non-critical" social requirements. In practice, this means they never will, because the security war is endless: every time security is upgraded, hackers exploit another loophole, in a never-ending cycle. If history shows us anything, it is that there is no ingenious defense devised by humans that an ingenious attack devised by other humans cannot eventually circumvent. Security is an ongoing effort, not a problem to be solved once.

The premise that one requirement cannot be addressed until another is satisfied does not hold. If social requirements affect performance, as they do, they must be addressed as well. Indeed, they address the same problem as security does, because they reduce people's motivation to attack the community (Rose, Khoo, & Straub, 1999). Polite computing addresses a common cause of attacks: anger against a system that allows those in power to prey upon the weak (Power, 2000). Hacking is often revenge against a person, a company or capitalist society in general (Forester & Morrison, 1994).

Politeness openly denies the view that "everyone takes what they can, so I can too", and so diminishes the hacker ethic. A polite system can make those who are neutral polite, and those who are against society neutral. Politeness and security are thus not alternatives, but two sides of the same coin of social health, discussed in the next chapter. By analogy, a gardener defends his or her crops from weeds but does not wait until every weed dies before fertilizing. Politeness grows social health, and so it complements rather than competes with security.


4.5 Politeness and Legitimacy

Legitimate interactions, defined as those that are both fair and in the common good, are the basis of civilized prosperity (Whitworth & deMoor, 2003), and legitimacy is a core demand of any prosperous and enduring community (Fukuyama, 1992). Societies that allow corruption and win-lose conflicts are among the poorest in the world (Transparency-International, 2001). Legitimate interactions offer fair choices to all parties involved, while anti-social crimes like theft or murder give the victim little or no choice. Figure 4.1 categorizes anti-social, social and polite interactions by the degree of choice the other party has.

              Anti-social     Social          Polite
Principle:    Unfairness      Legitimacy      Politeness
Practice:     Vengeance       Law             Etiquette

              ===================================>
              More choice for the other party

Figure 4.1: Social interactions by degree of choice given

So polite acts are more than fair, i.e. more than the law requires. To follow the law is not politeness, because it is required. One does not thank a driver who stops at a red light, but one does thank a driver who stops to let you into a line of traffic. Laws specify what citizens should do, but politeness is what they could do. Politeness involves offering more choices in an interaction than the law requires, so it begins where fixed laws end. If criminal acts fall below the law, then polite acts rise above it (Figure 4.2). Politeness increases social health as criminality poisons it.

Figure 4.2: Politeness is doing more than the law requires


4.4 Politeness and Social Performance

Politeness is considering the other party in a social interaction, and its predicted effect is a more pleasant interaction. In general, politeness makes a society a nicer place to be, whether online or offline. It contributes to computing by:

  • Increasing legitimate interactions.
  • Reducing anti-social interactions.
  • Increasing synergy.
  • Increasing software use.

Programmers can fake politeness, as people do in the physical world, but cognitive dissonance theory suggests that when people behave politely, they come to feel polite (Festinger, 1957). So if programmers design for politeness, the overall effect will be positive, even though some programmers may be faking it.

Over thousands of years, as physical society became "civilized", it created more prosperity. Today, for the first time in human history, many countries are producing more food than their people can eat, as their obesity epidemics testify. The bloody history of humanity has been a social evolution from zero-sum (win-lose) interactions like war to non-zero-sum (win-win) interactions like trade (Wright, 2001), with productivity the prize. Scientific research illustrates this: scientists freely giving their hard-earned knowledge away seems foolish, but when a critical mass do it the results are astounding.

Social synergy is people in a community giving to each other to get more than is possible by selfish activity; e.g. Open Source Software (OSS) products like Linux now compete with commercial products like Office. The mathematics of synergy reflects its origin in social interaction: competence gains increase linearly with group size, but synergy gains increase geometrically, as they depend on the number of interactions. In the World Wide Web, we each sow only a small part of it, but we reap from it the world's knowledge. Without polite computing, however, synergy is not possible.
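As a rough formalization (our notation, assuming synergy scales with the number of possible pairwise interactions in a group of n members):

    \text{competence gain} \propto n, \qquad
    \text{synergy gain} \propto \binom{n}{2} = \frac{n(n-1)}{2}

So doubling a community roughly doubles its competence gains but roughly quadruples its potential synergy gains, which is why participation matters so much to socio-technical systems.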

A study of reactions to a computerized Chinese word-guessing game found that when the software apologized after a wrong answer by saying "We are sorry that the clues were not helpful to you," the game was rated more enjoyable than when the computer simply said "This is not correct" (Tzeng, 2004). In general, politeness improves online social interactions and so increases them. Politeness is what makes a social environment a nice place to be. Businesses who wonder why more people don't shop online should ask whether the World Wide Web is a place people want to be. If it is full of spam, spyware, viruses, hackers, pop-ups, nagware, identity theft, scams, spoofs, offensive pornography and worms, then it will be a place to avoid. As software becomes more polite, people will engage with it more and avoid it less.
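The study's manipulation costs almost nothing to implement. A minimal sketch in Python (a hypothetical function of ours, with the polite wording taken from Tzeng, 2004):

    def guess_feedback(correct: bool, polite: bool = True) -> str:
        """Return feedback on a guess; only the social framing varies."""
        if correct:
            return "That is correct."
        if polite:
            # Apologize for the system's part rather than just judge the user.
            return "We are sorry that the clues were not helpful to you."
        return "This is not correct."

The program logic is identical either way, yet the polite variant measurably improved how the game was rated.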


4.3 Polite Software

Polite computing addresses the requirement for software to work politely with people. The Oxford English Dictionary defines politeness as:

                              “… behaviour that is respectful or considerate to others”.

So software that respects and considers users is polite. Politeness is quite distinct from usefulness or usability: usefulness addresses functionality, and usability how easy the software is to use. Usefulness is what the computer does, and usability is how users get it to do it. Polite computing, in contrast, is about social interaction, not computer power or cognitive ease. So software can be easy to use but rude, or polite but still hard to use. While usability reduces training and documentation costs, politeness lets a software agent interact socially with success. Both usability and politeness fall under the rubric of human-centered design.

Polite computing is about designing software to be polite, not making people polite. People are socialized by society, but rude, inconsiderate or selfish software is a widespread problem because it is a software design “blind spot” (Cooper, 1999). Most software is socially blind, except for socio-technical systems like Wikipedia, Facebook and E-Bay. This chapter outlines a vision of polite computing for the next generation of social software.


4.2 Selfish Software


Selfish software acts as if it were the only application on your computer, just as a selfish person acts as if only he or she exists. It pushes itself forward at every opportunity, loading at start-up and running continuously in the background. It feels free to interrupt you at any time, to demand things or to announce what it is doing. For example, after I (the first author) installed new modem software, it loaded itself on every start-up and regularly interrupted me with a modal window saying it was going online to check for updates to itself. It never found any, even after weeks. Finally, after yet another pointless "Searching for upgrades" message that I had to click "OK" to dismiss, I uninstalled it. When selfish applications go online to download upgrades without asking, a tourist with a smartphone can end up with high data-roaming bills for updates to software they never even use.
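A polite alternative is easy to sketch. The Python below is a minimal illustration (all names are ours, not any vendor's API): it never interrupts with a modal dialog, never downloads on a metered connection, and remembers a "never" answer so it does not nag:

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Update:
        version: str

    def check_for_updates(prefs: dict, metered: bool,
                          available: Optional[Update],
                          ask_user: Callable[[str], str]) -> Optional[Update]:
        """Decide politely whether to download an update."""
        if prefs.get("updates") == "never":
            return None                  # the user already declined; respect that
        if metered or available is None:
            return None                  # don't spend roaming data; don't announce "nothing found"
        choice = ask_user(f"Version {available.version} is available. "
                          "Download now, later, or never?")
        prefs["updates"] = choice        # remember the answer so we never nag
        return available if choice == "now" else None

For example, check_for_updates({}, metered=True, available=Update("2.1"), ask_user=input) returns quietly rather than billing a roaming tourist.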

People uninstalling software because it is impolite represents a new type of computing error: a social error. When a computer system gets into an infinite loop and hangs, it is a software error. When a person cannot understand what to do on a screen, it is an HCI error. When software offends and drives users away, it is a social error. In HCI errors, people want to use the system but do not know how; in social errors, they understand it all too well and choose to avoid it.

Socio-technical systems cannot afford social errors because in order to succeed they need people to participate. In practice, a web site that no-one visits is as much a failure as one that crashes. Whether a system fails because the computer cannot run it, the user does not know how to run it, or the user does not want to run it, does not matter. The end effect is the same – the application does not run.

For example, my 2006 computer came with McAfee Spamkiller, which, when activated, overwrote my Outlook Express mail server account name and password with its own values. I then no longer received email, because the mail server account details were wrong. After discovering this, I retyped the correct values to fix the problem and got my mail again. However, the next time I rebooted the computer, McAfee overwrote my mail account details again. I called the McAfee help person, who explained that Spamkiller was protecting me by taking control and routing all my email through itself. To get my mail I had to go into McAfee and give it my specific email account details, but when I did this it still did not work. I was now at war with this software, which:

  • Overwrote the email account details I had typed in.
  • Did nothing when my email didn’t work.

The software "took charge" but did not know what it was doing. Whenever Outlook started, it forced me to watch a slow, foreground, modal check for email spam, yet in two weeks of use it never found any! Not wanting to be held hostage by a computer program, I again uninstalled it as selfish software.
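The underlying design error is treating the user's configuration as the program's property. A polite program asks before overwriting a setting and keeps the old value so the change can be undone. A minimal sketch (our illustration, not McAfee's actual behavior):

    from typing import Callable

    def apply_setting(settings: dict, key: str, new_value: str,
                      ask_user: Callable[[str], bool]) -> bool:
        """Change a user setting politely: ask first, keep an undo value."""
        old_value = settings.get(key)
        if old_value == new_value:
            return True                            # nothing to do, nothing to say
        if not ask_user(f"May I change '{key}' from {old_value!r} to {new_value!r}?"):
            return False                           # the configuration is the user's
        settings[f"_previous_{key}"] = old_value   # keep the old value for undo
        settings[key] = new_value
        return True

Had Spamkiller asked once, and remembered the answer, the "war" above would never have started.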


4.1 Can Machines be Polite?


Software, with its ability to make choices, has crossed the border between inert machine and active participant, as the term human-computer interaction (HCI) implies. Computers today are no longer just tools that respond passively to directions but social agents that are online participants in their own right. Miller notes that if I hit my thumb with a hammer I blame myself, not the hammer, yet when people make mistakes with software they often blame the equally mechanical program behind it (Miller, 2004, p. 31).

Computer programs are just as mechanical as cars, as each state defines the next, yet programs now ask questions, suggest actions and give advice. Software that mediates a social interaction, like email, acts as a social facilitator. As computing evolves, people increasingly see programs as active collaborators rather than passive media. These new social roles, of agent, assistant or facilitator, imply a new requirement: to be polite.

To treat machines as people seems foolish, like talking to an empty car, but words addressed to cars on the road are actually addressed to their drivers. Cars are machines but their drivers are people. Likewise, a program is mechanical, but people "drive" the programs we interact with. It is not surprising, then, that people show significantly more relational behavior when the other party in computer-mediated communication is clearly human than when it is not (Shectman & Horowitz, 2003). Studies find that people do not treat computers as people outside the mediation context (Goldstein, Alsio, & Werdenhoff, 2002), just as people do not usually talk to empty cars. Treating a software installation program as if it were a person is not unreasonable if the program has a human source. Social questions like "Do I trust you?" and "What is your attitude to me?" apply. If computers have achieved the status of social agents, it is natural for people to treat them socially.

A social agent is a social entity that represents another social entity in a social interaction. The other social entity can be a person or a group, e.g. installation programs interact with customers on behalf of a company (a social entity). The interaction is social even if the agent, a program, is not, because an install is a social contract. Software is not social in itself, but to mediate a social interaction it must operate socially. If a software agent works for the party it interacts with, it is an assistant: it works for and answers to the same person. A human-computer interaction with an assistant also requires politeness. If software mediates social interactions, it should be designed accordingly. No company would send a socially ignorant person to talk to important clients, yet companies send out software that interrupts, overwrites, nags, steals, hijacks and in general annoys and offends people (Cooper, 1999). Polite computing is the design of socially competent software.


4. Polite Computing

This chapter considers politeness to be a software design requirement because politeness is what makes a community a nice place to be.

4.1 Can Machines be Polite?

4.2 Selfish Software

4.3 Polite Software   

4.4 Politeness and Social Performance   

4.5 Politeness and Legitimacy   

4.6 Politeness and Security   

4.7 Politeness and Etiquette   

4.8 A Definition of Polite Computing   

4.9 Polite Computing Requirements   

4.10 Impolite Computing

4.11 The Sorcerer’s Apprentice

4.12 Politeness in a Software Democracy

4.13 Politeness – A New Software Requirement

Chapter 4. Discussion Questions

Chapter 4. References   

Chapter 3. References

Ackerman, M. S. (2000). “The intellectual challenge of CSCW: The gap between social requirements and technical feasibility,” Human Computer Interaction, 15, pp. 179–203.

Beer, D., & Burrows, R. (2007) “Sociology and, of and in Web 2.0: Some Initial Considerations.” Sociological Research Online, 12(5).

Berners-Lee, T. (2000). Weaving The Web: The original design and ultimate destiny of the world wide web. New York: Harper-Collins.

Boutin, P. (2004). “Can e-mail be saved?,” Infoworld, 14, April 19, pp. 40–53.

Burk, D. L. (2001). “Copyrightable functions and patentable speech.” Communications of the ACM, 44, (2), pp. 69–75.

Callahan, D. (2004). The Cheating Culture. Orlando: Harcourt.

Eigen, P. (2003) Transparency International Corruption Perceptions Index 2003, Transparency International.

Foreman, B., & Whitworth, B. (2005). Information Disclosure and the Online Customer Relationship. In Proceedings of the Quality, Values and Choice Workshop, Computer Human Interaction, 2005, (pp. 1–7). Portland, Oregon.

Hoffman, L. R., & Maier, N. R. F. (1961). “Quality and acceptance of problem solutions by members of homogenous and heterogenous groups.” Journal of Abnormal and Social Psychology, 62, pp. 401–407.

Johnson, D. G. (2001). Computer Ethics. Upper Saddle River, New Jersey: Prentice-Hall.

Lessig, L. (1999). Code and other laws of cyberspace. New York: Basic Books.

Messaging Anti-Abuse Working Group (MAAWG). (2006). Email Metrics Program, First Quarter 2006 Report. Retrieved April 1, 2007.

Mandelbaum, M. (2002). The Ideas That Conquered the World. New York: Public Affairs.

MessageLabs. (2006). The year spam raised its game; 2007 predictions. Accessed 2006.

MessageLabs. (2010). Intelligence Annual Security Report 2010. Accessed 2010.

Meyrowitz, J. (1985). No Sense of Place: The impact of electronic media on social behavior. New York: Oxford University Press.

Mitchell, W. J. (1995). City of Bits Space, Place and the Infobahn. Cambridge, MA: MIT Press.

Porra, J., & Hirschheim, R. (2007). “A lifetime of theory and action on the ethical use of computers: A dialogue with Enid Mumford,” JAIS, 8, (9), pp. 467–478.

Poundstone, W. (1992). Prisoner’s Dilemma. New York: Doubleday, Anchor.

Reid, F. J. M., Malinek, V., Stott, C. J. T & Evans J. S. B. T. (1996). “The messaging threshold in computer-mediated communication,” Ergonomics, 39, (8) pp. 1017–1037.

Ridley, M. (2010). The Rational Optimist: How Prosperity Evolves. New York: Harper.

Robert, H. M. (1993). The New Robert’s Rules of Order. New York: Barnes & Noble.

Samuelson, P. (2003). “Unsolicited Communications as Trespass.” Communications of the ACM, 46, (10) pp. 15–20.

Shirky, C. (2008). Here Comes Everybody: The Power of Organizing Without Organizations. London: Penguin.

Short, J., Williams, E., & Christie, B. (1976) “Visual communication and social interaction – The role of ‘medium’ in the communication process.” In J.A. Short, E. Williams, & B. Christie (Eds), The Social Psychology of Telecommunications, (pp. 43-60). New York: Wiley.

Spence, R., & Apperley, M. (2011). “Bifocal Display.” In Mads Soegaard & Rikke Friis Dam (Eds.), Encyclopedia of Human-Computer Interaction. Aarhus, Denmark: The Interaction-Design.org Foundation. http://www.interaction-design.org/encyclopedia/bifocal_display.html

Weiss, A. (2003). “Ending spam’s free ride,” netWorker, 7 (2), pp. 18–24.

Whitworth, B., Gallupe, R. B., & McQueen, R. (2000). “A cognitive three-process model of computer-mediated group interaction.” Group Decision and Negotiation, 9, (5), pp. 431–456.

Whitworth, B., Van de Walle, B., & Turoff, M. (2000). Beyond rational decision making. In proceedings of Group Decision and Negotiation 2000 Conference, (pp. 1-13), Glasgow, Scotland.

Whitworth, B., Gallupe, B., & McQueen, R. (2001). “Generating agreement in computer-mediated groups.” Small Group Research, 32, (5), pp. 621–661.

Whitworth, B., & deMoor, A. (2003). “Legitimate by design: Towards trusted virtual community environments.” Behaviour & Information Technology, 22 (1), pp. 31–51.

Whitworth, B., & Whitworth, E. (2004). “Spam and the social-technical gap,” IEEE Computer, 37, (10), pp. 38–45.

Whitworth, B. & Liu, T. (2009). “Channel email: Evaluating social communication efficiency,” IEEE Computer, 42 (7), pp. 63-72.

Whitworth, B. & Whitworth, A. (2010). “The Social Environment Model: Small Heroes and the Evolution of Human Society,” First Monday, 15 (11).

Wright, R. (2001). Nonzero: The logic of human destiny. New York: Vintage Books.

Chapter 3. Discussion Questions

Research questions from the list below and give your answers, with reasons and examples. If you are reading this chapter as part of a class – either at a university or in a commercial course – work in pairs, then report back to the class.

1)   Why can technologists not leave the social and ethical questions to non-technologists? Give examples of IT both helping and hurting humanity. What will decide, in the end, whether IT helps or hurts us overall?

2)   Compare central vs. distributed networks (Ethernet vs. Polling). Compare the advantages and disadvantages of centralizing vs. distributing control. Is central control ever better? Now consider social systems. Of the traditional socio-technical principles listed, which ones distribute work-place control? Compare the advantages and disadvantages of centralizing vs. distributing control in a social system. Compare governance by a tyrant, a benevolent dictator and a democracy. Which type are most online communities? How might that change?

3)   Originally, socio-technical ideas applied social requirements to work-place management. How has it evolved today? Why is it important to apply social requirements to IT design? Give examples.

4)   Illustrate system designs that apply: mechanical requirements to hardware; informational requirements to hardware; informational requirements to software; personal requirements to hardware; personal requirements to software; personal requirements to people; community requirements to hardware; community requirements to software; community requirements to people; community requirements to communities. Give an example in each case. Why not just design software to hardware requirements?

5)   Is technology the sole basis of modern prosperity? If people suddenly stopped trusting each other, would wealth continue? Use the 2009 credit meltdown to illustrate your answer. Can technology solve social problems like mistrust? How can social problems be solved? How can technology help?

6)   Should an online system gather all the data it can during registration? Give two good reasons not to gather or store non-essential personal data. Evaluate three online registration examples.

7)   Spam demonstrates a socio-technical gap, between what people want and what technology does. How do users respond to it? In the “spam wars”, who wins? Who loses? Give three other examples of a socio-technical gap. Of the twenty most popular third-party software downloads, which relate to a socio-technical gap?

8)   What is a legitimate government? What is a legitimate interaction? How do people react to an illegitimate government or interaction? How are legitimacy requirements met in physical society? Why will this not work online? What will work?

9)   What is the problem with “social engineering”? How about “mental engineering” (brainwashing)? Why do these terms have negative connotations? Is education brainwashing? Why not? Explain the implications for STS design.

10)   For a well known STS, explain how it supports, or not, the eight proposed aspects of community performance, with screenshot examples. If it does not support an aspect, suggest why. How could it?

11)   Can we own something but still let others use it? Can a community be both free and ordered? Can people compete and cooperate at the same time? Give physical and online examples. How are such tensions resolved? How does democracy reconcile freedom and order? Give examples in politics, business and online.

12)   What is community openness for a nation? For an organization? For a club or group? Online? Why are organizations that promote people based on merit more open? Illustrate technology support for merit-based promotion in an online community.

13)   Is a person sending money to a personal friend online entitled to keep it private? What if the sender is a public servant? What if it is public money? Is a person receiving money from a personal friend online entitled to keep it private? What if the receiver is a public servant?

14)   What is communication? What is meaning? What is communication performance? How can media richness be classified? Is a message itself rich? Does video always convey more meaning than text? Can rich media deliver more communication performance? Give online and offline examples.

15)   What affects communication performance besides richness? How is it classified? Is it a message property? How does it communicate more? Give online/offline examples.

16)   If media richness and linkage both increase communication power, why not have both? Describe a physical world situation that does this. What is the main restriction? Can online media do this? What is, currently, the main contribution of computing to communication power? Give examples.

17)   What communication media type best suits these goals: telling everyone about your new product; relating to friends; getting group agreement? Give online and offline examples. For each goal, what media richness, linkage and anonymity do you recommend? You lead an agile programming team spread across the world: what communication technology would you use?

18)   State differences between the following media pairs: email and chat; instant messaging and texting; telephone and email; chat and face-to-face conversation; podcast and video; DVD and TV movie; wiki and bulletin board. Do another pair of your choice.

19)   How can a physical message convey content, state and position semantic streams? Give examples of communications that convey: content and state; content and position; state and position; and content, state and position. Give examples of people trying to add an ignored semantic stream to technical communication, e.g. people introducing sender state data into lean text media like email.

20)   Can a physical message generate many information streams? Can an information stream generate many semantic streams? Give examples. Does the same apply online? Use the way in which astronomical or earthquake data is shared online to illustrate your answer.

21)   You want to buy a new cell-phone and an expert web review suggests model A based on factors like cost and performance. Your friend recommends B, uses it every day, and finds it great. On an online customer feedback site, some people report problems with A and B, but most users of C like it. What are the pluses and minuses of each influence? Which advice would you probably follow? Ask three friends what they would do.

22)   What is the best linkage to send a message to many others online? What is the best linkage to make or keep friends online? What is the best linkage to keep up with community trends online? List the advantages and disadvantages of each style. How can technology support each of the above?

23)   Explain why reputation ratings, social bookmarks and tagging are all matrix communication. In each case, describe the senders, the message, and the receivers. What is the social goal of matrix communication? How exactly does technology support it?

24)   Give three online leaders searched on Google or followed on Twitter. Why do people follow leaders? How can leaders get people to follow them? How does technology help? If people are already following a set of leaders, how can new leaders arise? If people are currently following a set of ideas, how can new ideas arise? Describe the innovation adoption model. Explain how it applies to “viral” videos.

 

3.8 Three Cognitive Processes

Communication goals can be classified by level as follows:

1) Informational. The goal is to analyze information about the world and decide a best choice, but this rational analysis process is surprisingly fragile (Whitworth et al., 2000).

2) Personal. The goal is to form reliable interpersonal relationships. Relating involves a turn-taking, mutual-approach process, to manage the emotional arousal evoked by the presence of others (Short et al., 1976). Friends are better than information.

3) Community. The goal is to stay part of the group. Belonging to a community means being part of it, and so protected by it. Communities outlast friends.

The Maori Haka illustrates communication used when two war bands meet: it both binds the group and tells the other group that they are not to be trifled with.

Goal                       Influence                 Linkage         Questions
Analyze task information   Informational influence   Broadcast       What is right? What is best?
Relate to other people     Personal influence        Interpersonal   Who do I like? Who do I trust?
Belong to a community      Normative influence       Matrix          What are the others doing? Am I “in” the group?

Table 3.2: Communication goals and influence by preferred linkage

Table 3.2 shows how the level goals map to influence and linkage types. People online or off analyze information, relate to others and belong to communities, so are subject to informational, personal and normative influences. Normative influences are based on neither logic nor friendship; an example is patriotism, loyalty to your country whether its actions are right or wrong. Even brothers may kill each other on a civil war battlefield.

People are influenced by community norms, friend views and task information, in that order, via different semantic streams. Semantic streams arise as people process a physical signal in different ways to generate different meanings. So one physical message can at the same time convey:

1) Message content. Symbolic statements about the world, e.g. a factual sentence, give informational influence.

2) Sender state. Sender psychological state, e.g. an agitated tone of voice, gives relational influence.

3) Group position. Sender intent merged over many senders to get group intent, e.g. a vote, gives normative influence.

Human communication is subtle because one message can have many meanings, and people respond to them all at once. So when leaving a party I may say "I had a good time" but by tone imply the opposite. I can say "I AM NOT ANGRY!" in an angry voice. Note that some people, e.g. those with autism, do not process the sender-state semantic stream. What is less obvious is that a message can also indicate a position, or intent to act, e.g. to say "Thanks, I had a good time" in a certain tone or with certain body language can indicate an intention to leave a party.

What makes human communication complex is that a single physical signal can have as many semantic streams as the medium allows. Face-to-face talk allows three streams, but computer communication is usually more restricted, e.g. email text mainly just gives content information.
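A designer can picture this as a message type whose optional streams depend on the medium (our illustration, not a standard model):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Message:
        """One physical message carrying up to three semantic streams."""
        content: str                          # informational: statements about the world
        sender_state: Optional[str] = None    # relational: e.g. an agitated tone
        group_position: Optional[str] = None  # normative: e.g. a vote, an intent to act

    # Face-to-face talk can fill all three streams at once...
    spoken = Message("I had a good time", sender_state="flat, tired tone",
                     group_position="about to leave")
    # ...while plain email text usually carries only the content stream.
    emailed = Message("I had a good time")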

Community level systems using matrix communication include:

1) The reputation ratings of Amazon or E-Bay give community-based product quality control. Slashdot does the same for content, letting readers rate comments to filter out poor ones.

2) Social bookmarks, like Digg and Stumbleupon, let users share links, to see what the community is looking at.

3) Tag technologies increase the font size of links according to their frequency of use. As people walking in a forest follow tracks trodden by others, so we can now follow web-tracks on browser screens (a sizing sketch follows this list).

4) Twitter’s follow function lets leaders broadcast ideas to followers and people choose the leaders they like.
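The sizing rule behind such tag displays is simple. A minimal sketch (our illustration, using the common log-scaled approach so a few very popular tags do not dwarf the rest):

    import math

    def tag_font_sizes(counts: dict, min_px: int = 12, max_px: int = 36) -> dict:
        """Map each tag's use count to a font size, scaled by log frequency."""
        logs = {tag: math.log(n + 1) for tag, n in counts.items()}
        lo, hi = min(logs.values()), max(logs.values())
        span = (hi - lo) or 1.0              # all tags equal: avoid divide-by-zero
        return {tag: round(min_px + (v - lo) / span * (max_px - min_px))
                for tag, v in logs.items()}

    # Heavily trodden "tracks" render larger than rarely used ones.
    print(tag_font_sizes({"linux": 120, "politeness": 8, "spam": 30}))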

The power of the computer allows very fast matrix communication among millions, even billions, of people. What might a global referendum on current issues reveal? The Internet could tell us.

How these three basic cognitive processes, of belonging, relating and fact analysis, work together in the same person is shown in Figure 3.7.

Figure 3.7: Cognitive processes in communication.

The details are given elsewhere (Whitworth et al., 2000), but note that the rational resolution of task information is the third-priority cognitive process, not the first: only if all else fails do we stop to think things out or read instructions. Figure 3.8 shows how the three cognitive processes of belonging, relating and analyzing task information take priority in that order for most people. The Three Process Model predicts that people mostly make decisions by relating to others, as Facebook offers, and by following the majority, as Google and Twitter enable (a simplified sketch of this ordering follows Figure 3.8).

Figure 3.8: The cognitive processing priority
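Read as a decision procedure, this priority ordering is a fallback chain. A deliberately simplified sketch of the model (our illustration, not the authors' code):

    from typing import Optional

    def decide(community_norm: Optional[str], friend_view: Optional[str],
               own_analysis: Optional[str]) -> str:
        """Three Process Model as a fallback chain: belong, relate, then analyze."""
        if community_norm is not None:   # 1. first follow what the group does
            return community_norm
        if friend_view is not None:      # 2. else follow people we like and trust
            return friend_view
        if own_analysis is not None:     # 3. only if all else fails, think it out
            return own_analysis
        return "defer"                   # no basis to decide yet

    # A visible majority norm trumps both a friend's advice and our own analysis.
    print(decide("phone C", "phone B", "phone A"))   # -> "phone C"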

In conclusion:

The first level of the World Wide Web, an information library accessed by search tools, is well in place.

The second level of the World Wide Web, a medium for personal relations, is also well under way.

The third level of the World Wide Web, however, a civilized social environment, is the current and future challenge; even a cursory study of Robert’s Rules of Order dispels any illusion that social dealings are simple (Robert, 1993).

Socio-technology lets hundreds of millions of people act together, but we still do not know what Here Comes Everybody means (Shirky, 2008). The issue of the power of communities is illustrated as follows:

Question: Where does an 800 lb gorilla sit when it comes to dinner?

Answer: Anywhere it wants to.

Communities are like this, but first they must agree to act as one, which can take years. How exactly many people decide to act as one has always been a bit of a mystery, yet this is what we must learn in order to bring group applications to an Internet full of personal applications. Group writing and publishing, group browsing, group singing and music (online choirs and bands), group programming and group research illustrate the many areas of current and future potential in online group activity. The future of the Internet is in more private online communities that are hard to get into.
