The passage below is accompanied by a set of questions. Choose the best answer to each question.
As software improves, the people using it become less likely to sharpen their own know-how. Applications that offer lots of prompts and tips are often to blame; simpler, less solicitous programs push people harder to think, act and learn.
Ten years ago, information scientists at Utrecht University in the Netherlands had a group of people carry out complicated analytical and planning tasks using either rudimentary software that provided no assistance or sophisticated software that offered a great deal of aid. The researchers found that the people using the simple software developed better strategies, made fewer mistakes and developed a deeper aptitude for the work. The people using the more advanced software, meanwhile, would often “aimlessly click around” when confronted with a tricky problem. The supposedly helpful software actually short-circuited their thinking and learning.
[According to] philosopher Hubert Dreyfus . . . . our skills get sharper only through practice, when we use them regularly to overcome different sorts of difficult challenges. The goal of modern software, by contrast, is to ease our way through such challenges. Arduous, painstaking work is exactly what programmers are most eager to automate—after all, that is where the immediate efficiency gains tend to lie. In other words, a fundamental tension ripples between the interests of the people doing the automation and the interests of the people doing the work.
Nevertheless, automation’s scope continues to widen. With the rise of electronic health records, physicians increasingly rely on software templates to guide them through patient exams. The programs incorporate valuable checklists and alerts, but they also make medicine more routinized and formulaic—and distance doctors from their patients. . . . Harvard Medical School professor Beth Lown, in a 2012 journal article . . . warned that when doctors become “screen-driven,” following a computer’s prompts rather than “the patient’s narrative thread,” their thinking can become constricted. In the worst cases, they may miss important diagnostic signals. . . .
In a recent paper published in the journal Diagnosis, three medical researchers . . . examined the misdiagnosis of Thomas Eric Duncan, the first person to die of Ebola in the U.S., at Texas Health Presbyterian Hospital Dallas. They argue that the digital templates used by the hospital’s clinicians to record patient information probably helped to induce a kind of tunnel vision. “These highly constrained tools,” the researchers write, “are optimized for data capture but at the expense of sacrificing their utility for appropriate triage and diagnosis, leading users to miss the forest for the trees.” Medical software, they write, is no “replacement for basic history-taking, examination skills, and critical thinking.” . . .
There is an alternative. In “human-centred automation,” the talents of people take precedence. . . . In this model, software plays an essential but secondary role. It takes over routine functions that a human operator has already mastered, issues alerts when unexpected situations arise, provides fresh information that expands the operator’s perspective and counters the biases that often distort human thinking. The technology becomes the expert’s partner, not the expert’s replacement.
In the context of the passage, all of the following can be considered examples of human-centred automation EXCEPT:
"There is an alternative. In “human-centred automation,” the talents of people take precedence. . . . In this model, software plays an essential but secondary role. It takes over routine functions that a human operator has already mastered, issues alerts when unexpected situations arise, provides fresh information that expands the operator’s perspective and counters the biases that often distort human thinking. The technology becomes the expert’s partner, not the expert’s replacement."
The above excerpt from the passage defines the human-centred approach. In this model, the human remains the primary decision-maker, and the software's role is restricted to assistance.
Option A: Since the software's role is limited to providing feedback on the doctor's analysis, this is a clear example of the human-centred approach. Thus, this is not the correct option.
Option B: In this option, too, the technology's role depends on the instructions provided by the resident (a human), and hence, this is not the correct option.
Option C: Since the software, in this case, operates on its own (auto-completion), it does not take human talent and thinking into account and hence is not an example of human-centred automation. Thus, this is the correct option.
Option D: In this case, the software provides assistance only when the user requests it, keeping the human in charge, and hence, it is not the correct option.
Thus, the correct option is C.