The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
In [my book "Searches"], I chronicle how big technology companies have exploited human language for their gain. We let this happen, I argue, because we also benefit somewhat from using the products. It's a dynamic that makes us complicit in big tech's accumulation of wealth and power: we're both victims and beneficiaries. I describe this complicity, but I also enact it, through my own internet archives: my Google searches, my Amazon product reviews and, yes, my ChatGPT dialogues. . . .
People often describe chatbots' textual output as "bland" or "generic" - the linguistic equivalent of a beige office building. OpenAI's products are built to "sound like a colleague", as OpenAI puts it, using language that, coming from a person, would sound "polite", "empathetic", "kind", "rationally optimistic" and "engaging", among other qualities. OpenAI describes these strategies as helping its products seem "professional" and "approachable". This appears to be bound up with making us feel safe . . .
Trust is a challenge for artificial intelligence (AI) companies, partly because their products regularly produce falsehoods and reify sexist, racist, US-centric cultural norms. While the companies are working on these problems, they persist: OpenAI found that its latest systems generate errors at a higher rate than its previous system. In the book, I wrote about the inaccuracies and biases and also demonstrated them with the products. When I prompted Microsoft's Bing Image Creator to produce a picture of engineers and space explorers, it gave me an entirely male cast of characters; when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English. Those weren't flukes. Research suggests that both tendencies are widespread.
In my own ChatGPT dialogues, I wanted to enact how the product's veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement. Over time, ChatGPT seemed to be guiding me to write a more positive book about big tech - including editing my description of OpenAI's CEO, Sam Altman, to call him "a visionary and a pragmatist". I'm not aware of research on whether ChatGPT tends to favor big tech, OpenAI or Altman, and I can only guess why it seemed that way in our conversation. OpenAI explicitly states that its products shouldn't attempt to influence users' thinking. When I asked ChatGPT about some of the issues, it blamed biases in its training data - though I suspect my arguably leading questions played a role too. When I queried ChatGPT about its rhetoric, it responded: "The way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading." . . .
OpenAI has its own goals, of course. Among them, it emphasizes wanting to build AI that "benefits all of humanity". But while the company is controlled by a non-profit with that mission, its funders still seek a return on their investment. That will presumably require getting people using products such as ChatGPT even more than they already are - a goal that is easier to accomplish if people see those products as trustworthy collaborators.
The author of the passage is least likely to agree with which one of the following claims?
Let's examine each option in relation to the passage.
Option A is the one the author is least likely to agree with. The author is concerned that AI's neutral, friendly tone actually reduces critical thinking instead of encouraging it. She writes that the product's "veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement." This goes against the idea that neutrality helps critical thinking.
The author is cautious about option B but does not reject it. She says that ChatGPT "seemed to be guiding me to write a more positive book about big tech," and even notes that it changed her description of Sam Altman. She also says she is "not aware of research" and can "only guess why it seemed that way." This shows she is sceptical and careful about the claim, but she neither fully endorses nor dismisses it.
Option C matches the author's view. Early on in the passage, she says that using big tech products makes users "complicit in big tech's accumulation of wealth and power," and calls us "both victims and beneficiaries." This directly supports the claim; the author would certainly agree with this statement.
Option D also fits with the passage. The author says that funders "seek a return on their investment" and that it is easier to get people to use products like ChatGPT "if people see those products as trustworthy collaborators." This connects AI's neutral tone to business goals. The author would therefore agree with this statement as well.
So, the author is least likely to agree with option A.