I am having trouble with "cannot be true" questions! I incorrectly chose (B) for this question, and I want to make sure I know why it's wrong and why (A) is correct.
(B) is incorrect because the principle it states is not violated by the stimulus. Since the surveyors were able to group the responses into a manageable number of categories, this principle was not violated.
On the other hand, (A) is correct because the survey asked respondents to rank the quality of brands that they were not familiar with. Therefore, the survey did ask all respondents a question that could not be reasonably answered.
Is my reasoning correct?
Thanks so much for your help!
#22 - In a survey of consumers in an Eastern European nation
5 posts • Page 1 of 1
If you are having trouble with "cannot be true" questions, feel free to think of them as "must be false" questions. Or you can attack the question by eliminating the answer choices that could be true or must be true.
Specific to this question:
Your reasoning is exactly right. To be even more explicit, the survey asked all respondents about their approval of brands, a question which those who had no or little recognition of particular brands could not reasonably answer.
I think it's important to be similarly specific when evaluating an answer choice. When you evaluate (A) that specifically, the match between the general principle and its specific application becomes easier to see. Then you don't necessarily need to consider why the others are wrong, because the correct answer choice will be clearer.
Thanks so much, Claire! I think eliminating the could/must be true answers works best for me, so I'll keep working on that.
Can you help me understand a bit more clearly why (A) is correct and (B) is not?
I understood the stimulus to basically be saying that most of the respondents recognized a minority of the brands, and then rated them differently. For example, a respondent might recognize 10 of the 27 brands, and of those 10, love 5 and hate the other 5. So they rate the top 5 as 1-5 and the bottom 5 as 22-27, and throw the other 17 randomly in the middle, in both recognition and approval.
I sort of understand the correctness of (A); there were a bunch of responses about approval that could not be reasonably answered because no one recognized those brands, either. But that doesn't seem to really be about questions that cannot be reasonably answered by "respondents who make a *particular* response to another question in the same survey." It's about random responses, not particular responses.
(B), however, reflects the fact that most of the respondents' answers were random garbage: a "large variety of responses that are difficult to group into a manageable number of categories." Granted, the categories are pretty easily defined, since they are force-ranked, but they don't seem particularly manageable given that they're not much more than random answers.
Where am I going wrong?
As to answer (A): The first question in this survey can be answered by everyone. It simply asks whether or not they recognize a particular brand. Everyone can answer either yes or no. As for the second question, anyone who answered "yes" to the first question can probably reasonably answer that follow-up, but those who answered "no" to the first question cannot reasonably answer the second.
So to parse out answer (A):
Never ask all respondents a question (the second question here about quality) if it cannot be reasonably answered by respondents who make a particular response ("No") to another question in the same survey (the first question about brand recognition).
With answer (B) the facts just don't fit the language here. Playing devil's advocate, (B) would have to be addressing the second question in the stimulus. But is that question "likely" to generate a "large" variety of responses? It doesn't appear so. The question simply asked "whether or not they thought....they were of high quality." That's really a yes or no question. And are those responses "difficult" to group into a "manageable number" of categories? Again, it doesn't appear so. There would have to be more information provided to make that call.
One final point that may be the root issue here. It was not the respondents who rated the products. They simply provided "yes" or "no" answers to the two questions. The survey takers rated the products based on the responses given to them by the respondents.
Hope that helps!
PowerScore LSAT/GMAT/SAT Instructor