Questions and Answers About Computers
Q: What is the fastest computer?
A: In 2001 the fastest computer in the world was IBM’s ASCI White computer at Lawrence Livermore National Laboratory. It has a peak processing speed slightly greater than 12 trillion operations per second, or 12 teraops. ASCI stands for the U.S. government’s Accelerated Strategic Computing Initiative, which is aimed at developing 100 teraop computers by the year 2004.
Q: How small can computer chips get?
A: In 1965 Gordon Moore, one of the founders of Intel Corporation, predicted that computer chip density (and thus computer processing power) would double every 18 months, and this prediction has proved remarkably accurate for the past 35 years. Chip designers can make chips smaller, or they can pack more electronic circuits onto the chips. Sometimes they do both. Both techniques are aimed at making chips faster and able to process more information in a shorter period of time.
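Moore's prediction is easy to check with a little arithmetic. Here is a rough sketch in Python, using the 18-month (1.5-year) doubling period quoted above:

```python
# Back-of-the-envelope arithmetic behind Moore's Law: if chip density
# doubles every 18 months (1.5 years), total growth is exponential.
def moores_law_factor(years, doubling_period=1.5):
    """Growth factor in chip density after a given number of years."""
    return 2 ** (years / doubling_period)

# From 1965 to 2000 is 35 years, or roughly 23 doublings:
# a growth factor of about 10 million.
print(round(moores_law_factor(35)))
```

This is why even a small change to the doubling period makes an enormous difference over a few decades.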
Today, many of the most advanced chips are made with wires 0.18 micron wide. (A human hair is about 100 microns wide.) The next generation will be 0.15 micron wide. Most chips now contain millions of transistors, the microscopic switches that process electrical impulses.
There have long been predictions that Moore’s Law, as Gordon Moore’s prediction is called, will end soon. That’s because the density of components and the extremely small width of the circuit wires on chips today are approaching the limit of what we can expect to get onto a silicon chip. Many experts feel that we’re likely to see the end of the silicon chip by the year 2012, perhaps earlier.
Because of this, there are many research projects going on now that are investigating different models of computer chips, including molecular chips, biological chips, and the processing of packets of light instead of electricity. It’s too early to tell which of these projects will be successful. But someday computers may become as small as molecules or bacteria.
Q: What are the two or three most common types of operating systems today?
A: The answer to this question depends on what kinds of computers you’re talking about. For personal computers, the leading operating system is some version of Microsoft Windows, such as Windows 95, Windows 98, Windows 2000, Windows Me, or Windows NT. Collectively, Microsoft’s family of operating systems reportedly runs about 95 percent of personal computers worldwide.
The second-place operating system for personal computers is Apple’s Macintosh OS, which runs between 3 and 4 percent of all personal computers.
In third place, according to StatMarket.com, is Microsoft’s WebTV service, which is not a PC operating system at all but can be considered a “personal” operating system in use by consumers at home. WebTV is a service and a set-top box that turns a television into a device for Web browsing and reading and sending e-mail. It is gradually being replaced by Microsoft’s new UltimateTV service, which adds new features to WebTV’s Internet access.
When you look at computers used as Internet servers—the computers that “serve” up Web pages, e-mail, graphics, and other files to Internet users—these statistics change. There are millions of server computers around the world, although far fewer than personal computers.
In the server market, about 60 percent of computers run some version of the Unix operating system, including the free operating system known as Linux. A majority of Web servers, for example, use the free Web server program called Apache, which runs on Unix-based computers. In this market, Microsoft's Windows NT and Windows 2000 are in second place.
Q: What are the most common computer-related health problems?
A: Computer-related health problems are almost never the sole product of the computer itself. Instead, they are usually caused by using a computer in a particular, unhealthy way, and most health problems can be avoided by altering computer-use habits.
For example, one health risk is repetitive stress injury (RSI), a family of injuries usually affecting the hand, wrist, or forearm. These are often caused by stress that accumulates when a keyboard and a mouse are used for long periods without a break. RSI, if left untreated or allowed to worsen, can be serious and debilitating. One of the best-known and most severe forms is carpal tunnel syndrome, which can cause chronic pain in the wrist. In serious cases of carpal tunnel syndrome, the hand can become so weak that the person finds it difficult to hold a cup or a fork.
RSI can usually be prevented by giving the hands and wrists regular periods of rest while using the computer and by using special pads or supports that help reduce the strain. Sitting with correct posture is also important.
Other health risks related to computer use are problems such as blurry vision, headaches, or muscle aches in the neck and back. These can almost always be relieved by taking short breaks whenever you’re using the computer for a long period of time.
Q: How big was the biggest computer ever?
A: The answer to this question depends somewhat on how one defines a computer. Using the conventional definition of computer, meaning a single machine that processes the same computer program, the biggest computer was almost certainly the ENIAC, which was introduced to the public in 1946 at the University of Pennsylvania. (The name ENIAC is an acronym for electronic numerical integrator and computer.) ENIAC contained 18,000 vacuum tubes, 6,000 switches, 10,000 capacitors, 70,000 resistors, and 1,500 relays. It stood 3 m (10 ft) tall, occupied 200 sq m (1,800 sq ft), and weighed 30 tons. In other words, ENIAC covered the space of a modestly sized American house. A single computer chip in a modern personal computer has more than 60,000 times the computing power of ENIAC.
There’s another definition of a computer that would imply a different answer to the question. If we take into account the idea of distributed computing, which means that a single computer can be made up of many different and separate computers all linked together to accomplish some task, then we might view the entire global Internet as the largest computer ever. And it is getting bigger all the time. Some computer scientists are starting to think of the Internet in this way.
Q: Who invented the computer?
A: Unlike a lot of other inventions, there is no single individual responsible for the invention of the computer. A lot of people developed different ideas that contributed to the computer as we know it today.
Progress toward the development of a machine that could process information began in the 1830s, when British mathematician Charles Babbage made the first proposal for such a machine. About 100 years later, Alan Turing, another British mathematician and philosopher, developed the theory of a “universal machine” that could carry out any computation that can be written down as a step-by-step procedure. Turing’s pioneering work in the 1930s and during World War II (1939-1945) earned him the title “father of computer science.”
The world’s first large-scale, electronic digital computer was ENIAC. It was designed by Americans John W. Mauchly and John Presper Eckert, Jr., with help from another one of the founders of modern computer science, information theorist John von Neumann. Many other people have built on the fundamental work of these pioneers.
By the way, the word computer originally referred to people, not machines. During World War II it was the job title of U.S. and British servicewomen whose job it was to calculate the trajectories of large artillery shells. They used hand-cranked calculating machines to solve their equations. When electronic machines were developed to do this job, the term computer was transferred from the women to the machines.
Q: What is the most useful computer language to learn these days?
A: The answer to this question is debated endlessly by computer programmers and computer scientists. In general, the answer depends on what you want to do with computers and programming:
1. If you hope to work in the field of Web publishing or e-commerce, the combination of Java, Structured Query Language (SQL), and “middleware” programs such as Active Server Pages (ASP) seems like a good bet.
2. If you’re interested in developing computer applications for sale to PC customers, you might want to try C++ or its variations.
3. If you’re attracted by the claims of the open source software movement and want to use the Linux or BSD Unix operating systems, you’ll probably be combining C++ with open source scripting languages such as Perl, PHP, or Python, often alongside the open source MySQL database.
4. If you’re just getting started and want to learn something about programming and build some basic skills, Microsoft’s Visual Basic for Windows and REAL Software’s REALbasic for the Macintosh are excellent choices.
In general, there are two classes of languages that are useful today: object-oriented languages such as Java and C++, and database structure languages such as SQL. In the long run, learning the basics of these concepts will be the most valuable strategy.
Q: What are different computer languages used for?
A: There are many different languages used for programming computers. The reasons that programmers choose one instead of another are varied and complex. Probably the most common reason is that their employer or client—whoever has ordered the program—already uses a specific language and wants to keep new programs consistent with it. Or else the client has hired programmers to rewrite or fix software code already in a specific language.
Many programmers have a strong preference for a specific computer language because they feel comfortable using it or because they view it as the best one for the kinds of projects they work on. In the past, some computer languages were developed with specific purposes in mind. COBOL, for example, was developed for business programming; it is still widely used but is no longer considered a modern programming language. BASIC, also still widely used in various new forms, was designed to be easy to learn and use. Java, perhaps the fastest-growing programming language, was developed to run on any kind of computer (although it’s debatable whether this language has succeeded in achieving that goal).
Finally, there are some cases in which the computer’s operating system requires that programmers use a certain programming language or that they choose from among a limited range of options.
Q: How many people use computers on a regular basis?
A: We don’t have a completely accurate way to count the number of people who are using computers on a regular basis. We can only estimate the number—something experts try to do all the time. We base our estimates on different data, such as how many computers have been sold, or on surveys of computer users. There are companies that do surveys by calling people on the telephone or by asking them questions through the mail or on the Internet. Using these answers, we then build statistical models that estimate how many people use computers and what they use them for. But these are always estimates, and the numbers can vary quite a bit depending on the underlying data.
Our best guess today [2001] is that there are over 300 million computers in use worldwide. In the United States, a little over half of households have a personal computer, and a little under half of these households access the Internet. There are many more computers in businesses, of course. There is a very uneven distribution of computers in the world. The United States has more personal computers than the rest of the world put together. But computer use is growing faster in areas such as Latin America and Asia than it is inside the United States. Most experts believe that by 2005, over 1 billion people will be using computers.
Q: Why do people make computer viruses?
A: This is a good question with no simple answer, or even a very clear one. The answer might be psychological, sociological, or ethical—we don’t really know. Probably every individual who writes a computer virus has his or her own reasons for doing so. Those reasons may include the hope for attention and possibly fame, malicious intent or revenge, boredom, or simply experimentation—because they have written a program that acts like a virus and they want to try it out.
We know that some computer viruses have been mistakes. For example, the famous 1988 Internet “worm” created by Cornell University graduate student Robert Morris was particularly nasty—and nearly brought down the Internet—because of a programming error in Morris’s code.
Computer viruses are unethical, and most of them are the product of immature but technically skilled programmers, whatever their intentions. People who can write computer programs should consider the ethics attached to what their programs do.
Q: When will wearable computers be available?
A: Wearable computers are available today, if you stretch your ideas about what a computer is. People are wearing, so to speak, portable MP3 music players, for example, which are nothing but digital computers dedicated to playing music. Millions of people also carry in their pockets a handheld computer such as a Pocket PC, a Palm Pilot, or a Handspring Visor. The capabilities of these devices are growing by leaps and bounds, too. In 2000 it became commonplace to see new handheld computers with wireless Internet connectivity.
If by wearable we mean computers that we wear as clothes, or computers embedded in our clothes, these are technologically feasible now but don’t seem to have much of a market yet. We’d have to figure out if we want computers in our clothes or draped on us, as opposed to being held in the hand. Perhaps the next step might be a computerized and Internet-connected wristwatch, and some companies have shown prototypes of these. But whether these kinds of products succeed has yet to be determined by consumers.
Q: Does Deep Blue count as artificial intelligence?
A: Deep Blue is the computer developed by IBM that became famous when it beat world chess champion Garry Kasparov in a chess match in New York City in May 1997. Deep Blue is programmed with knowledge about how to play chess, but the machine’s real advantage is its ability to examine 200 million possible board positions per second. It therefore uses what some people call “brute strength processing” to examine an enormous number of possible moves, instead of the flashes of insight and deep experience a champion like Kasparov employs.
The answer to the question depends on what anyone believes “counts” as artificial intelligence. Many people, including Kasparov, were impressed with Deep Blue’s chess-playing ability—Kasparov remarked that it felt like he was playing against a super-smart human player. But chess is a game that is highly structured with rules and mathematical boundaries imposed by its playing board. Computers are very good—often better than human beings—at manipulating information when there are such rules and boundaries.
However, computers are not as good at handling information that is fluid and unbounded by rules, such as most information in human life. Scientists still don’t know how to program a computer to have common sense that is available even to small children. So, while Deep Blue’s chess-playing ability is valuable in understanding how computers can be smart in a particular way, we are still a long way from developing computers that can think the way humans do.
Q: Why do programmers put “Easter eggs” in programs?
A: Who knows! For fun, perhaps? Or as a cool kind of signature after they’ve finished their work? As a kind of digital Cracker Jack prize to be found by curious and diligent users?
The term Easter egg, when applied to computer programs, refers to a frivolous digital feature hidden somewhere inside a program that is usually revealed only when the user does something unusual. A user might access an Easter egg by hitting an odd combination of keys while looking at a specific window or dialog box, for example. The Easter egg may be a funny picture or a saying, a list of the names of the programmers, or even a slogan disparaging a competing product, which has been known to appear in a few commercial programs. Most software companies try to discourage their programmers from leaving Easter eggs, but they seem to slip through anyway.
Easter eggs seem to be a form of self-expression for programmers, not unlike engineers leaving their initials in the wet concrete of a new building.
Q: What is the ultimate goal of AI research?
A: There are probably as many opinions about the ultimate goal of artificial intelligence (AI) research as there are AI researchers. The answer to this question has been controversial for decades within the field of AI research.
There are those who believe—sometimes with great passion—that technologists will one day be able to build a computer with all the cognitive, memory, and emotional capabilities of the human brain. These people are sometimes called the “strong” AI proponents. A few of these “strong” advocates believe that computers will someday be more intelligent than human beings. It is common to hear such researchers say that this is the ultimate frontier of science.
On the other hand, there are other AI researchers who think that research into how human minds work can be useful in building better computer systems, regardless of whether we pursue a goal of full machine intelligence. In other words, these “weak” AI proponents believe that human cognition and its applicability to computers is an interesting research field in itself, and a field that may help make computers easier to use, more useful to people, and better at what computers are good at doing. The products of this research may not resemble human intelligence. Some “weak” AI proponents say that computers are obviously superior to human beings at some tasks, and it’s the job of AI research to figure out how to optimize those capabilities, instead of making computers more like people.
The ultimate answer to your question is that there is no single answer. Each researcher—indeed, each observer of the field of AI research—is likely to answer your question differently.
Q: How does a hacker hack?
A: The term hacker originally referred to a programmer who develops computer programs by hacking the code until it works, instead of using formal, structured programming techniques. Many computer programmers still use the word with this meaning. But the news media has used the term hacker to refer to someone who breaks into computer systems without authorization, and this use of the word has stuck with the public. (Computer experts often use the word cracker for people who break into computer systems.)
How do people break into, or hack into, computer systems? There are many ways. Probably the most common way is simply to try many different things until some method works, which is the connection to the old meaning of hacking. Hackers often try using pairs of account names and passwords until one combination grants them entry into the computer system. This is why it’s important to keep your account name and password secret and to change them every now and then.
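The arithmetic behind that advice is simple: every extra character multiplies the number of passwords an intruder must try. A rough sketch (assuming, purely for illustration, passwords built only from the 26 lowercase letters):

```python
# Why password guessing works on short, simple passwords: the number
# of possibilities grows exponentially with password length.
def search_space(length, alphabet_size=26):
    """Number of possible passwords of the given length."""
    return alphabet_size ** length

for length in (4, 8):
    print(length, search_space(length))
# A 4-letter password has under half a million possibilities;
# an 8-letter one has over 200 billion.
```

Mixing in digits, capitals, and punctuation enlarges the alphabet and makes guessing harder still.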
Hackers are also known to share information with one another about how to break into systems. As computer security gets more sophisticated, hackers do too. This is an ongoing and probably permanent problem for a society dependent on computers and computer networks such as the Internet.
Q: Why are the hard drives of computers partitioned?
A: Computer hard drives can be partitioned, or divided up into separate zones or segments. Most computer operating systems—such as Microsoft Windows or Apple’s Macintosh operating systems—consider these separate hard drive partitions as if they were individual hard disks, even though they all exist on one hard drive. Computer users partition hard drives using software.
There are several benefits to hard drive partitioning. Reading from and writing to a hard drive partition can often be faster than using an unpartitioned drive, because the hard drive mechanism doesn’t have to cover the entire drive. This is especially useful on very large hard disks. The data on a specific hard drive partition can be locked, making it impossible to erase or write over, which can be useful for important files.
An increasingly common use of hard drive partitioning is to load separate and different operating systems on different partitions. The individual partitions can then be used as if they were different startup or boot drives, with each partition loading a different operating system. One partition might load Windows 2000, for example, while another might load the operating system Linux. Thus, a single computer might be reconfigured in a very different way for different tasks, depending on which partition was used as the boot drive when the computer was started.
Q: What is a minimal hardware configuration for e-business?
A: It is difficult to answer this question because the term e-business can cover so many different activities, from simply taking e-mail orders for a product or a service to full-blown e-commerce sites with online, secure ordering and high-volume sales.
If you were to put together hardware for the most basic and simple model of e-business, an ordinary consumer PC would be perfectly adequate. You’d need Web server software and probably some e-mail server software as well. For high-traffic sites it’s a good idea to use two separate machines, one for serving Web pages and another for e-mail; for low-traffic sites of, say, a few hundred visitors per day, Web and e-mail services can be run on the same computer. A typical mid-tier consumer PC of the current generation, with a Pentium processor and a sizable hard drive of several gigabytes, would be more than adequate. You’d want at least 128 megabytes of RAM, more if you can afford it (fortunately, RAM is cheap these days).
More important than the kind of computer you use is the speed, quality, and reliability of the connection you have to the Internet. For a reliable and useful network connection for online commerce, you’ll probably want at least a T1 line, a telephone company term for a dedicated data connection that runs at 1.5 million bits per second. Even faster connections would be better. You’ll definitely want a symmetric connection too, one whose download and upload speeds are the same. This rules out using a cable modem or most digital subscriber lines, which are typically asymmetric connections. Asymmetric networks are designed for Internet end users, not online businesses.
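To put those line speeds in perspective, here is a rough calculation of transfer times at each rate. The 10-megabyte file size is a made-up example, and the line rates are nominal:

```python
# Rough arithmetic on connection speeds (nominal line rates,
# ignoring protocol overhead).
def seconds_to_send(megabytes, bits_per_second):
    """Time to transmit a file, with 1 MB counted as 8 million bits."""
    return megabytes * 8 * 1_000_000 / bits_per_second

# Sending a hypothetical 10 MB catalog of product images:
print(round(seconds_to_send(10, 56_000)))      # 56K dial-up modem
print(round(seconds_to_send(10, 1_500_000)))   # T1 line
```

The dial-up figure comes to roughly 24 minutes, versus under a minute for the T1 line, which is why serious online businesses pay for the faster connection.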
Finally, you’ll want and need a static IP address. This means that the number your computer is assigned on the Internet should not change. Your business domain name (as in microsoft.com) will need to be fixed to an unchanging number so your customers can find your site on the Web. Again, this rules out some kinds of Internet connections, such as cable modems, which do not typically provide static IP addresses.
An increasingly popular alternative for small e-businesses is Web hosting, or running your e-business site on another company’s computer, using its software and its network connection. You usually maintain your site by uploading files using your own Internet connection in your office or at home and logging into your site remotely. There are so many companies offering Web hosting services now that these services are amazingly inexpensive. You can probably develop an online business with many useful features for less than $100 per month. This is in contrast with the charge for a T1 line to your own office or home, which would start at about $500 per month and could be as much as $1,500 per month. And if you live in a rural area, you may not be able to get high-speed, symmetric Internet connections at any price. In this case, Web hosting is your only real choice for e-business.
Q: How is XML different from HTML?
A: HTML stands for Hypertext Markup Language, and it is used as the basic page description language for displaying Web pages in a Web browser, such as Microsoft Internet Explorer. HTML tells the browser application where to put text and images on a page, when to boldface text, when to italicize text, how to indicate hypertext links, and the destination of hypertext links, among other things. Most Web users are familiar with what HTML does, even if they are not familiar with what the actual HTML code looks like.
XML (which stands for Extensible Markup Language), on the other hand, is not a page description language but an object description language. Instead of telling a browser application where to put things on a page or how to display them, XML tells a browser what all the objects are, so the browser can understand something about their purpose and function.
For example, the HTML code on either side of a book title might tell the browser to put the title in italics, but it won’t tell the browser or the user anything about that text except that it is in italics. XML, however, can tell the browser that the title refers to a book, information that might be useful for processing, linking to other programs, and performing other techniques.
XML, in other words, is information about information, or what experts call meta-information. HTML does not give the user information about the data on a page; it simply describes where things go and how they work. Putting HTML and XML together creates a powerful combination that most experts view as the future of Web publishing.
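The contrast can be sketched with a toy example. The HTML fragment only says how the title should look; the XML fragment says what the data is, so a program can pull out the pieces by meaning. The tag names below are invented for illustration, and the parsing uses Python's standard XML library:

```python
# HTML vs. XML in miniature. HTML describes presentation only;
# XML describes what each piece of data *is*.
import xml.etree.ElementTree as ET

html_fragment = "<i>Moby-Dick</i>"   # just says: display in italics

xml_fragment = (
    "<book><title>Moby-Dick</title>"
    "<author>Herman Melville</author></book>"
)
book = ET.fromstring(xml_fragment)

# Because the markup carries meaning, a program can ask questions:
print(book.find("title").text)    # Moby-Dick
print(book.find("author").text)   # Herman Melville
```

Nothing in the HTML fragment tells a program that the italicized text is a book title; the XML version makes that explicit.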
Q: How does a light pen work?
A: A light pen is a device that looks like a pen and is held in the hand the same way as a common writing pen, except that instead of dispensing ink it senses light. The tip of the pen contains a light detector. On a conventional CRT monitor, the screen image is redrawn many times each second by a scanning electron beam; when that beam sweeps past the spot the pen is touching, the pen detects the brief flash of light and signals the computer, which uses the timing of the flash to calculate exactly where on the screen the pen is pointing. This allows the user to draw or select items directly on the screen, as if he or she were drawing with ink on a pad of paper.
Q: What’s the difference between a modem and DSL?
A: The word modem is a contraction of the term modulator/demodulator, which is a description of what a modem does.
Computers use digital data—consisting of ones and zeros—while most telephone wires require analog data (information that is represented by an electronic waveform). A modem is the device that translates between these two ways of representing data. It translates digital data into analog data by modulating the digital ones and zeroes into an analog waveform; it can also receive analog data and demodulate it into digital bits the computer can understand. This is why you usually need a modem to hook a computer to a telephone line.
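What modulation means can be sketched in a few lines: represent each bit as a short burst of one of two tones, a simplified version of the frequency-shift keying used by early modems. The particular frequencies and sample counts below are illustrative, not a real modem standard:

```python
# A toy sketch of modulation: each bit becomes a short burst of one
# of two tones (simplified frequency-shift keying). The frequencies
# and sampling rate are illustrative, not a real modem standard.
import math

def modulate(bits, f0=1200.0, f1=2200.0, rate=8000, samples_per_bit=8):
    """Turn a string of '0'/'1' bits into a list of waveform samples."""
    wave = []
    for i, bit in enumerate(bits):
        freq = f1 if bit == "1" else f0   # pick the tone for this bit
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / rate
            wave.append(math.sin(2 * math.pi * freq * t))
    return wave

samples = modulate("1011")
print(len(samples))  # 4 bits x 8 samples each = 32 samples
```

Demodulation runs the process in reverse: the receiver measures which tone is present in each burst and recovers the original ones and zeros.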
DSL stands for digital subscriber line. This is a newer technology and is not yet available everywhere, although its use is growing rapidly. DSL carries data over an ordinary telephone line, but in frequency bands above the ones used for voice calls, so it is not confined to the narrow voice channel that a dial-up modem must squeeze its signal through. This makes connections between a computer and the Internet up to a hundred times faster than dial-up modems allow.
People using DSL are required to use a separate box that DSL service providers call a DSL modem. Strictly speaking, these devices do modulate and demodulate data (onto high-frequency carriers rather than audible tones), but they work quite differently from the familiar dial-up modem. The name seems to have carried over to the new technology mainly because consumers were already used to the term.
Q: What’s the difference between RAM and ROM?
A: RAM stands for “random access memory,” and ROM stands for “read-only memory.” Both are computer chips found in personal computers and other computational devices, but they do different things.
RAM chips store digital data on a temporary basis, serving as a place for computer data to be stored while it is being processed by a computer program. The data in RAM chips can be frequently overwritten, or replaced, as one uses a computer program. In most computers, RAM chips lose their data when the computer is turned off or restarted.
ROM chips also store data but on a permanent basis—data on ROM chips cannot be overwritten by the user. These are used to store data that the computer is likely to use over and over again, such as basic instructions for starting the computer, testing the operating system, or configuring the keyboard. ROM chips retain their data when the computer is turned off or rebooted.
One way to think about this difference is that ROM chips are like a book that contains valuable information and cannot be changed, while RAM chips are like a blackboard that can be written on at will and changed many times.
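The book-and-blackboard difference can also be sketched in a few lines of Python. This simulates the behavior of the two kinds of memory, not how the chips are actually built:

```python
# A simulation of the RAM/ROM difference: both can be read,
# but only RAM allows its contents to be overwritten.
class ROM:
    def __init__(self, data):
        self._data = list(data)        # contents fixed at "manufacture"

    def read(self, address):
        return self._data[address]

    def write(self, address, value):
        raise PermissionError("ROM is read-only")

class RAM(ROM):
    def write(self, address, value):   # RAM permits overwriting
        self._data[address] = value

ram = RAM([0, 0, 0])
ram.write(0, 42)
print(ram.read(0))                     # → 42
```

Trying the same `write` call on a `ROM` object raises an error, just as a real ROM chip ignores attempts to change its contents.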
Q: As a business person who uses e-mail as an important part of my business, I am constantly trying to clean up e-mail files by deleting them, only to find out at a later date that I still need them. Just last week I accidentally deleted five or six orders without realizing what I had done until afterward. Is there any way of retrieving these e-mails?
A: The answer to your question about deleting e-mail is more complicated than one might anticipate. Answers to most computer-related questions usually begin with “it depends,” and this one is no different. Of course, what is most likely to have happened is that you have indeed deleted those e-mail messages permanently and they’re gone forever, irretrievable.
But some programs, such as Microsoft Outlook Express and Eudora, put deleted messages into a folder or a mailbox that stores deleted items. Outlook Express puts deleted messages into a folder labeled Deleted Items, which you must deliberately empty for all of those items to be erased permanently. If you see items in your Deleted Items folder, they can be moved from there back into any other folder available within Outlook Express. Eudora works the same way, although it uses a Trash mailbox instead of a Deleted Items folder.
In Outlook Express, if you empty the Deleted Items folder, you’re prompted with a warning alert asking if you’re sure you want to delete the items in the folder. If you click yes, then those messages are permanently deleted and they cannot be retrieved. If you haven’t emptied that folder in a while, you might find messages in it that you thought you had deleted.
There is another way your messages might still be retrievable. If you have set up your e-mail client program to leave messages on the mail server, the messages you’ve deleted on your computer may still exist on the server. (System administrators don’t like you to set up your e-mail client program this way because it tends to fill up mail folders on the server, but it is an option in e-mail client programs.)
Finally, a thought prompted by your question: Many people these days have decided that deleting messages and sorting e-mail into folders is too time-consuming and they have simply given up. Instead, they save everything. E-mail doesn’t take up very much hard disk space or memory, and in any case both large hard drives and memory are cheap these days. If I find I have too many messages in my inbox, I just create a new folder with the name “Inbox” and the date, and then I dump everything into it. After that, I can search on and retrieve any of those messages, my new Inbox is clean and empty, and I haven’t wasted hours and hours reading, sorting, and deleting old e-mail messages. This works better for me than trying to keep a “tidy” e-mail system. Apparently this practice is now so widespread that it prompted a recent article in the New York Times (“You’ve Got Maelstrom,” by Robert Strauss, New York Times, July 5, 2001).
Q: How many Americans telecommute on a regular basis?
A: It’s hard to find a definitive answer to this question, mostly because the definition of telecommuting—working on a job from afar via telecommunications technologies—varies among studies. Also, research studies that attempt to determine the number of telecommuters have to use statistical sampling, because counting all the telecommuters in the United States at any given time is impossible. Because of this, there are many studies that report different numbers, some of them quite far apart.
However, having said this, it appears that the most widely accepted figure is from 1997, which pegged the number of Americans participating in some form of telecommuting at about 11 million. Growth rates derived from previous studies suggest that there will be about 15 million U.S. telecommuters within the next year or two. Again, these are rough estimates.
Q: I downloaded over 800 files from Napster on my laptop. My father, who owns a desktop, recently purchased an internal CD-writer. Is there a way I can transfer my Napster files onto his computer so I can write my own CDs?
A: There are lots of ways to transfer files from one computer to another. For home users, probably the most common way to transfer files is to use some kind of removable disk, like a floppy diskette or a Zip disk. With 800 music files, you need a disk with lots of room, such as a Zip disk, and you may well need more than one, since that many music files can easily add up to a few gigabytes. If both of your computers have Zip drives, you can copy the files from your laptop to a Zip disk and then insert the disk into your father’s computer to open the files on his computer. You can then save those files to his computer. Or you could use one external Zip drive and move it between your two computers.
Another method is to actually connect the two computers together, which you can do if they have networking ports such as Ethernet ports. By connecting a cable called a crossover cable to each computer’s Ethernet port, you can connect the two computers and access one from the other as if the computers were on a local area network. Then you can copy the files from one computer to the other.
Finally, you can compress the files using a software utility such as Zip compression and then e-mail the compressed file to your father, as an attachment to an e-mail message. On his computer, you’ll need the same program to decompress the file and get the original files back to their usable format.
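Compressing a batch of files into a single archive can be sketched with Python's standard zipfile module. The file names below are made-up stand-ins, and the demo creates its own dummy files in a scratch folder:

```python
# A sketch of bundling several files into one compressed archive
# before e-mailing it. File names here are made-up stand-ins.
import os
import tempfile
import zipfile

workdir = tempfile.mkdtemp()                 # scratch folder for the demo
filenames = ["song1.mp3", "song2.mp3"]
for name in filenames:                       # create stand-in files
    with open(os.path.join(workdir, name), "wb") as f:
        f.write(b"fake audio data " * 100)

archive_path = os.path.join(workdir, "songs.zip")
with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as archive:
    for name in filenames:
        archive.write(os.path.join(workdir, name), arcname=name)

with zipfile.ZipFile(archive_path) as archive:
    names = archive.namelist()               # what the recipient will see
print(names)
```

The single `songs.zip` file can then be attached to an e-mail; the recipient unpacks it with any zip-compatible utility to get the original files back.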
It’s worth noting that many Napster files are considered illegal copies by music recording companies who own the copyrights to the songs. Among 800 files downloaded through Napster, there are likely to be a lot of illegal copies. You may not be thinking of this, but it is unethical to keep or share illegal copies of computerized files. Do the right thing and get rid of your illegal copies.
Q: Is it wrong of me to use my work computer for personal tasks?
A: Most companies today have policies about how their computers can be used by employees, so you should certainly check to see if your company has such a policy and whether the policy answers your question.
If your company doesn’t have a policy about acceptable use of your work computer, you might try thinking about how your company views your personal use of your work telephone. Most companies allow a few brief personal calls on the phone—for example, arranging to pick up your children or talking to your spouse about where to meet for dinner. Certainly you are likely to be approved for using your work phone for family emergencies. Using these ideas as rules of thumb, you can probably use common sense about how you might use your computer as well.
One thing to remember is that e-mail on company computers should never be considered private. Companies can legally monitor employee e-mail—some, although not all, do so. Companies can also monitor the use of the World Wide Web by individual employees, and all computer monitoring can be done without alerting the employee. It’s therefore a good idea to figure out what is acceptable personal use of your work computer and to stick to those rules.