LSAT and Law School Admissions Forum

Get expert LSAT preparation and law school admissions advice from PowerScore Test Preparation.

 Dave Killoran
PowerScore Staff
  • Posts: 5852
  • Joined: Mar 25, 2011
#27070
Hi Angel,

In the Parallel chapter, I talk about this idea on pages 514-515, starting with the paragraph that opens with, "When matching conclusions, you must match the certainty level or intent of the conclusion in the stimulus, not necessarily the specific wording of the conclusion." That idea then re-appears throughout that chapter.

The idea of force as I've referenced it above often relates to modifiers, which are covered inside that Parallel discussion, discussed separately on pages 49-50, and addressed throughout the book.

Thanks!
 avengingangel
  • Posts: 275
  • Joined: Jun 14, 2016
#27080
PERFECT! Thanks. I haven't gotten to the Parallel chapter yet, so I've marked it and will be sure to pay close attention!!
 jrc3813
  • Posts: 53
  • Joined: Apr 16, 2017
#34470
When I did this question I got it right but I'm having a hard time describing why. I just want to make sure I can get a similar question right in the future.

When I read the question, I looked for an answer that would somehow cast doubt on the accuracy of the data. I narrowed it down to B and C quickly, and I chose C. Would it be right to say that B does weaken the argument, just not as much as C? With B, I thought they were implying a sample-size problem with people who had received treatment for less than 6 months and responded to the survey, and therefore that you couldn't accurately compare the two groups. But they didn't outright say the sample was too small, so I rejected it.

For C, is it saying that the causality is reversed? In other words, because early treatment was effective, people were more likely to stay in treatment, and so at least some proportion of the people in the post-6-month group were actually effectively treated by their earlier treatment? The second statement confused me, though. Is it just reaffirming the same idea?

As I was taking the test the answer intuitively made sense, but I just want to be sure I wasn't just a bit lucky.
 Jonathan Evans
PowerScore Staff
  • Posts: 726
  • Joined: Jun 09, 2016
#34481
JRC,

Good question about a challenging problem! You did a good job identifying that the point at issue here involves whether the survey data backs up the author's analysis when subjected to scrutiny. The challenge in this question involves the ease with which the test-writers can create attractive incorrect answers by manipulating the data to little effect, as you noted with Answer Choice (B).

To develop this point further, you should note that (as with most problems) there are two distinct approaches to getting this right, both of which reinforce each other.
  1. Determine exactly how we could directly weaken the argument, and come up with a strong prephrase to this effect.
  2. Get a general sense of what the flaw is, and rule out incorrect answers that do not effectively exploit this flaw.
In this case, you have taken the second approach. Your approach is valid and a strong way to get to the solution, especially when a problem presents difficulty. Let's work on improving both our ability to spot what's wrong in the incorrect answers and our ability to predict the correct answer.

First, zero in on the distinction between the two groups surveyed. What is similar and dissimilar? While we don't know the baseline sample size of each group, we are given a comparison between percentages. In other words, the data includes a percentage who thought the treatment "made things a lot better" for both categories.

Now, certainly, given no other salient differences between these groups other than how long they received treatment, there remain a number of ways to attack the validity of the conclusion. For instance:
  1. The sample size of one or both these groups might be far too small.
  • The percentage of people in the "over six months" group who found the treatment deleterious might far exceed the corresponding percentage in the "under six months" group.
  3. There could be some other flaw in the survey, how it was conducted, etc.
However, there remains a much more powerful and likely way the LSAT will weaken an argument such as this:
  • Demonstrate an inherent confounding distinction between these two groups.
In other words, this argument will quite likely turn not on some statistical artifact but rather on whether these groups are actually analogous to begin with. If we could show that the "under six month" group and the "over six month" group are qualitatively different, we would have a powerful and slightly less obvious way to attack this conclusion.

The above issue is precisely what happens with (B) and (C). Answer Choice (B) indicates that those who had received treatment for longer than six months were more likely to respond; however, this distinction is not in and of itself significant enough to weaken the argument very much. Unless the numbers are extremely disparate or one group is very small, the underlying number of respondents does not by itself undermine the usefulness of the comparison. (B) is an excellent trap answer because it invites you to bring in additional assumptions: e.g., that the under-six-month group might have been very small. Well, it might have been, or it might not have been. Who knows?

That's where (C) shines. It presents a direct qualitative distinction between these two groups that straight-up harms the conclusion. The latter group is a self-selected sample, likely in large part a subset of members of the under six month group who found therapy helpful and continued. Thus, if we had 1000 members in the under six month group, 200 of whom found therapy helpful, the over six month group might have consisted of only 560 people, with the same 200 still finding therapy helpful. We have shifted from 20% to 36% without any increase in the number of people who find therapy helpful. Those who didn't find it helpful dropped out!
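
If it helps to see that arithmetic spelled out, here is a minimal sketch in Python; the figures are just the hypothetical ones used for illustration above, not data from the stimulus:

# Hypothetical illustration of the self-selection effect described above.
under_six_total = 1000          # patients in the under-six-month group
helped = 200                    # those who found the treatment helpful
print(f"Under six months: {helped / under_six_total:.0%} report improvement")   # 20%

# Many of the patients who were NOT helped quit before six months,
# while the 200 who were helped stay in treatment.
dropouts = 440                                  # all from the "not helped" group
over_six_total = under_six_total - dropouts     # 560 patients remain past six months
print(f"Over six months: {helped / over_six_total:.0%} report improvement")     # ~36%

# The reported percentage climbs from 20% to roughly 36% even though the
# number of people actually helped never changed.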

For another problem that tests a similar concept, compare PrepTest 8, June 1993, LR Section 1, Question 13.
 lsatfighter
  • Posts: 26
  • Joined: Sep 26, 2018
#62889
Here's my analysis of this whole question.

A is wrong, because it doesn't matter whether or not 10% of respondents from the longer-term treatment group spoke negatively about the treatment. It's irrelevant and it does nothing to weaken the conclusion that longer-term treatment is more effective than shorter-term treatment.

B and D are wrong, because both of them use the word "likely." Likelihood does not imply reality/certainty. Just because the events mentioned in B and D were likely does not mean that they actually happened. For all we know, quite the opposite could have happened or maybe the events mentioned in B and D didn't even happen at all.

E is wrong, because it's too broad and general. We have no way of knowing whether the psychologists specifically mentioned in the stimulus were among the "many psychologists" mentioned in E who encourage their patients to receive longer treatment.

As for answer choice C, I'm unsure as to exactly how it weakens the argument. I think that C weakens the argument in one of two possible ways:

1) Longer-term treatment is more effective than shorter-term treatment (the cause/conclusion) ----> Greater percentage of positive survey responses from longer-term treatment group than the shorter-term treatment group (the effect/premises).

C weakens the argument by introducing an alternative cause, which is the placebo effect. The words "FEEL they are doing well in treatment" in C indicate a placebo effect. C indicates that it was a placebo effect and not the effectiveness of longer-term treatment which led to the positive survey responses.

2) Longer-term treatment (the cause) ----> Higher level of effectiveness/Greater percentage of people who feel better (the effect).

C weakens the argument by reversing the cause and effect. C basically says, "if you're feeling better, then you're going to remain in treatment longer."

Explanation #1 is an easier one for me to understand. Explanation #1 just makes more sense to me. Are both explanations valid? Can you please provide me with a further explanation of why C is correct and the other answer choices are wrong? Thank you in advance.
 Robert Carroll
PowerScore Staff
  • Posts: 1787
  • Joined: Dec 06, 2013
#62952
fighter,

Answer choice (A) doesn't weaken because I have no basis for comparing the two groups. It gives me information about one group. It could be that perceived effectiveness doesn't match true effectiveness. If so, knowing that 10% of people thought treatment wasn't effective tells me nothing about whether it really was effective. But even if perception correlates with reality here, I don't know the corresponding percentage for the other group. So I can't compare the cases.

Answer choices (B) and (D) aren't wrong because they use the word "likely". In fact, the correct answer choice uses the phrase "tend to", which is a qualified, non-absolute phrase. Further, distrusting the facts presented in the answer choices is a non-starter for a Weaken question - note the question says "Which one of the following, if true..." (emphasis mine).

You're always supposed to take the statement in any answer choice as given for a Weaken question. Wrong answers aren't wrong because they might not be true - they're wrong because, even if true, they wouldn't weaken the argument. It IS valid to consider whether what an answer choice says is "likely" would weaken the argument if that fact were merely "likely" and not "certain". But that the likelihood is true should be taken as given.

The issues with answer choices (B) and (D) are that, even if the relevant likelihoods are true, they don't affect the argument.

Answer choice (E) is wrong because it doesn't matter whether the "many psychologists" coincided with those in the stimulus. It just doesn't matter. Psychologists may encourage their patients to receive prolonged treatment, but does that make any difference with regard to the effectiveness? Not at all.

With regard to answer choice (C), as Jonathan pointed out in the post immediately before yours, the answer gives an explanation for the increasing percentage of responses that claim treatment "made things a lot better" that doesn't speak to the effectiveness of longer treatment. Simply put, if only people happy with treatment stick with it, then those unhappy with treatment will tend to drop out at higher rates, leaving behind a higher and higher proportion of people who are happy with it. Thus, you'd EXPECT the percent to go up, no matter how effective treatment is! This is what the answer does - it explains that people happy with the treatment will stay in it, while others drop out, so the "happy with it" people make up an increasing portion of the total as time goes on. And none of that relates to actual effectiveness. So the answer provides an alternative explanation of the facts that shows they might not indicate anything about effectiveness.
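
To make that dynamic concrete, here is a minimal sketch in Python of how attrition alone can push the satisfied percentage up over time; every number in it is invented for illustration and comes from neither the stimulus nor the survey:

# Hypothetical cohort: the treatment never helps anyone new, but dissatisfied
# patients quit at a much higher rate than satisfied ones.
satisfied, unsatisfied = 200, 800

for month in range(1, 13):
    unsatisfied = int(unsatisfied * 0.80)   # 20% of unhappy patients quit each month
    satisfied = int(satisfied * 0.98)       # only 2% of happy patients quit
    total = satisfied + unsatisfied
    print(f"Month {month:2d}: {satisfied / total:.0%} of remaining patients are satisfied")

# The reported satisfaction rate rises month after month even though the
# treatment's actual effectiveness never changed.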

Your first explanation doesn't seem to fit because the connection between felt efficacy and actual efficacy isn't broken by answer choice (C). Ultimately, in the stimulus we have reported information that may or may not reflect reality. An answer choice that made it more likely that people's reported states didn't match their actual states would be good, but this one doesn't do it. I think that talk of a "placebo effect" misses the mark because the placebo effect IS an effect. The treatment may be effective only because people think it is! But then it would be effective. If longer treatment increased the placebo effect, that could actually be good for the argument. So I don't think it's a helpful way of looking at the situation.

The second explanation actually seems closer to the mark, but Jonathan's explanation fills in the gaps. Why is longer treatment leading to higher reported satisfaction? Because the treatment is better, or because satisfied people stick around while others don't, artificially boosting the percentage as time goes on? The latter explanation is how answer choice (C) weakens the argument.

Robert Carroll
 Jerrymakehabit
  • Posts: 52
  • Joined: Jan 28, 2019
#62973
Nikki Siclunov wrote: Chris,

Before we begin, a word of caution: deconstructing the argument into premises/conclusions does not mean cutting and pasting the language of the stimulus into your post. For one thing, this violates LSAC copyright regulations. For another, it shows that you didn't "process" the information. Always simplify what you're reading and distill the essential information. Here's how I'd break this down:
Premise: 20% of those in the <6-month group said the help was effective
Premise: 36% of those in the >6-month group said the help was effective
Conclusion: treatment lasting >6 months is more effective than treatment lasting <6 months.
What's wrong with this line of reasoning? Just because a higher proportion of the people in the >6-month group said it's effective doesn't mean that longer treatment is necessarily more effective. What if this group is biased? We are looking for some underlying bias that could explain the difference in reported effectiveness. That's all I'd prephrase here: I'm looking for a bias they didn't control for, something skewed about the data.

Answer choice (C) matches that prephrase. If those who are doing poorly tend to quit earlier, then you'd expect that over a longer period of time a higher proportion of those who are doing well will remain in treatment. Indeed, by that line of reasoning, after 2 years of treatment, maybe only 5 people will be left in treatment, every single one of them absolutely thrilled with the help they are getting. Does that mean that treatment lasting longer is more effective? Of course not! The other people have already quit, making it impossible to tell the optimal length of time it takes for the treatment to actually work.

How likely it is that people respond to the survey makes no difference here, as both percentages (20% and 36%) are of those responding to the survey. Whether either group is more likely to respond makes no difference whatsoever.

Hope this clears things up! Questions involving numbers and percentages are discussed in Lesson 9 of the Full Length LSAT course, so make sure to check that out.

Thanks!
Hi Nikki,

In your explanation you wrote, "Does that mean that treatment lasting longer is more effective? Of course not!" I feel like the answer is yes. After 2 years of treatment, only 5 people are left and every one of them feels thrilled. So all the people who left should have stayed longer so that they would feel the effects too. That would demonstrate that longer treatment is more effective than shorter treatment, which strengthens the argument. What is wrong with my logic here? Can you please help?

Thanks
Jerry
 Malila Robinson
PowerScore Staff
  • Posts: 296
  • Joined: Feb 01, 2018
#62987
Hi Jerry,
Your confusion may be coming from what you may have assumed has caused those 5 remaining people to be thrilled with what they feel is an effective treatment. Is it the treatment? Maybe. But maybe it was something like a specific trait that all 5 of those people had (and other people do not necessarily have), which made the treatment more effective for them. If the latter is true, and if people who feel their treatment is working are more likely to stay in treatment longer, it would not lead to the conclusion that longer treatment is more effective.
Hope that helps,
-Malila
 lavalsat
  • Posts: 13
  • Joined: Jan 26, 2021
#84602
Hello,

I am struggling with this question. For answer C, I am having a hard time understanding how it truly weakens the argument. Regarding the topic of "psychological treatment," wouldn't "patients who feel they are doing well" = effective treatment?

If we say that patients who feel they are doing well equals effective treatment, then I do not see how C weakens the argument.

Thanks!
 Jeremy Press
PowerScore Staff
  • Posts: 1000
  • Joined: Jun 12, 2017
#84628
Hi lavalsat,

There are a couple of good posts above that do a great job of explaining why answer choice C weakens the argument. Here are the relevant excerpts from them:
Jonathan Evans wrote: there remains a much more powerful and likely way the LSAT will weaken an argument such as this:
  • Demonstrate an inherent confounding distinction between these two groups.
In other words, this argument will quite likely turn . . . on whether these groups are actually analogous to begin with. If we could show that the "under six month" group and the "over six month" group are qualitatively different, we would have a powerful and slightly less obvious way to attack this conclusion.

. . . (C) shines. It presents a direct qualitative distinction between these two groups that straight-up harms the conclusion. The latter group is a self-selected sample, likely in large part a subset of members of the under six month group who found therapy helpful and continued. Thus, if we had 1000 members in the under six month group, 200 of whom found therapy helpful, the over six month group might have consisted of only 560 people, with the same 200 still finding therapy helpful. We have shifted from 20% to 36% without any increase in the number of people who find therapy helpful. Those who didn't find it helpful dropped out!

For another problem that tests a similar concept, compare PrepTest 8, June 1993, LR Section 1, Question 13.
Robert Carroll wrote:
With regard to answer choice (C), as Jonathan pointed out in the post immediately before yours, the answer gives an explanation for the increasing percentage of responses that claim treatment "made things a lot better" that doesn't speak to the effectiveness of longer treatment. Simply put, if only people happy with treatment stick with it, then those unhappy with treatment will tend to drop out at higher rates, leaving behind a higher and higher proportion of people who are happy with it. Thus, you'd EXPECT the percent to go up, no matter how effective treatment is! This is what the answer does - it explains that people happy with the treatment will stay in it, while others drop out, so the "happy with it" people make up an increasing portion of the total as time goes on. And none of that relates to actual effectiveness. So the answer provides an alternative explanation of the facts that shows they might not indicate anything about effectiveness.

Your first explanation doesn't seem to fit because the connection between felt efficacy and actual efficacy isn't broken by answer choice (C). Ultimately, in the stimulus we have reported information that may or may not reflect reality. An answer choice that made it more likely that people's reported states didn't match their actual states would be good, but this one doesn't do it. I think that talk of a "placebo effect" misses the mark because the placebo effect IS an effect. The treatment may be effective only because people think it is! But then it would be effective. If longer treatment increased the placebo effect, that could actually be good for the argument. So I don't think it's a helpful way of looking at the situation.

The second explanation actually seems closer to the mark, but Jonathan's explanation fills in the gaps. Why is longer treatment leading to higher reported satisfaction? Because the treatment is better, or because satisfied people stick around while others don't, artificially boosting the percentage as time goes on? The latter explanation is how answer choice (C) weakens the argument.
What both Robert and Jonathan do a good job of showing here is that answer choice (C) isn't just talking about patients who feel they are doing well. It's also talking about those who are doing poorly, telling us that they quit earlier. So according to the answer, patients who are happy with the treatment stick around, and patients who aren't doing well (patients for whom the therapy is ineffective) drop out earlier. This means that, although there would be a higher percentage of patients still in treatment at the 6-month (or longer) mark who think treatment is working, there could also be a substantial number for whom treatment wasn't working and who dropped out before that point. That's enough of a hook to say that, for at least some patients, the therapy itself is not effective (regardless of length of treatment; i.e., even if they'd stuck around, they could very well have ended up with the same outcome: the therapy wasn't working).

I hope this helps!
