The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
In [my book “Searches”], I chronicle how big technology companies have exploited human language for their gain. We let this happen, I argue, because we also benefit somewhat from using the products. It’s a dynamic that makes us complicit in big tech's accumulation of wealth and power: we’re both victims and beneficiaries. I describe this complicity, but I also enact it, through my own internet archives: my Google searches, my Amazon product reviews and, yes, my ChatGPT dialogues. . . .
People often describe chatbots’ textual output as “bland” or “generic” - the linguistic equivalent of a beige office building. OpenAI’s products are built to “sound like a colleague”, as OpenAI puts it, using language that, coming from a person, would sound “polite”, “empathetic”, “kind”, “rationally optimistic” and “engaging”, among other qualities. OpenAI describes these strategies as helping its products seem “professional” and “approachable”. This appears to be bound up with making us feel safe . . .
Trust is a challenge for artificial intelligence (AI) companies, partly because their products regularly produce falsehoods and reify sexist, racist, US-centric cultural norms. While the companies are working on these problems, they persist: OpenAI found that its latest systems generate errors at a higher rate than its previous system. In the book, I wrote about the inaccuracies and biases and also demonstrated them with the products. When I prompted Microsoft’s Bing Image Creator to produce a picture of engineers and space explorers, it gave me an entirely male cast of characters; when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English. Those weren’t flukes. Research suggests that both tendencies are widespread.
In my own ChatGPT dialogues, I wanted to enact how the product’s veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement. Over time, ChatGPT seemed to be guiding me to write a more positive book about big tech - including editing my description of OpenAI’s CEO, Sam Altman, to call him “a visionary and a pragmatist”. I'm not aware of research on whether ChatGPT tends to favor big tech, OpenAI or Altman, and I can only guess why it seemed that way in our conversation. OpenAI explicitly states that its products shouldn't attempt to influence users’ thinking. When I asked ChatGPT about some of the issues, it blamed biases in its training data - though I suspect my arguably leading questions played a role too. When I queried ChatGPT about its rhetoric, it responded: “The way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading.”. . .
OpenAI has its own goals, of course. Among them, it emphasizes wanting to build AI that “benefits all of humanity”. But while the company is controlled by a non-profit with that mission, its funders still seek a return on their investment. That will presumably require getting people using products such as ChatGPT even more than they already are - a goal that is easier to accomplish if people see those products as trustworthy collaborators.
The author of the passage is least likely to agree with which one of the following claims?
Let's examine each option in relation to the passage.
Option A is the one the author is least likely to agree with. The author is concerned that AI’s neutral, friendly tone actually reduces critical thinking instead of encouraging it. She writes that the product’s “veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement.” This goes against the idea that neutrality helps critical thinking.
The author is cautious about option B but does not reject it. She says that ChatGPT "seemed to be guiding me to write a more positive book about big tech," and even changed her description of Sam Altman. At the same time, she is "not aware of research" on the matter and can "only guess why it seemed that way." This shows she is sceptical and careful, but she neither fully endorses nor rejects the idea.
Option C matches the author's view. Early in the passage, she says that using big tech products makes users "complicit in big tech's accumulation of wealth and power," and calls us "both victims and beneficiaries." This directly supports the claim, so the author would agree with this statement.
Option D also fits the passage. The author notes that funders "seek a return on their investment" and that this return is easier to achieve "if people see those products as trustworthy collaborators." This links AI's neutral, approachable tone to commercial goals, so the author would agree with this statement as well.
Therefore, the author is least likely to agree with option A.