Pathologist: What Women Need To Know About Breast Biopsy Accuracy

A breast biopsy which illustrates the grey zone of pre-cancer (Courtesy of Dr. Michael J. Misialek)

By Michael J. Misialek, MD

If you’re a woman who has ever had a breast biopsy, you may be asking yourself a few serious questions:

“How do I know if my breast biopsy is completely accurate?” And, “Who is the pathologist reading the biopsy, and what is their level of training?”

Many more patients are asking these and similar questions following widespread media coverage of a Journal of the American Medical Association (JAMA) study that casts doubt on the accuracy of interpreting these biopsies.

Let’s break the study down and ease some anxiety. Perhaps most importantly, this provides a great opportunity to learn about one of the lesser-known medical specialties, pathology…which is what I do.

The JAMA study, “Diagnostic Concordance Among Pathologists Interpreting Breast Biopsy Specimens,” revealed the following key finding:

• Overall agreement between individual pathologists’ interpretations and that of an expert consensus panel was 75 percent, with the highest agreement on invasive breast cancer and lower levels of agreement for ductal carcinoma in situ (DCIS) and atypical hyperplasia.

What this means is that agreement between a general pathologist and an expert was excellent for invasive breast cancer (tumors capable of metastasizing), but varied significantly for early cancers and high-risk pre-cancers.

While the study’s findings may not surprise physicians who understand the challenges of diagnosing complex breast cases, news coverage of the article could unnecessarily heighten anxiety for patients and the public, since breast cancer is such a highly publicized and pervasive disease.

The study confirmed that the majority of breast pathology diagnoses, especially at either end of the spectrum (benign disease and invasive breast cancer), are accurately made by practicing pathologists regardless of practice setting. The overall rate of agreement for invasive breast cancer cases was 96 percent.

Issues with diagnostic disagreement center mainly on the borderline cases: atypical hyperplasia (a pre-cancer) versus DCIS (an early cancer).

Why does this matter? Overdiagnosis can lead to unnecessary surgery, treatment and anxiety. Underdiagnosis can lead to a delay in treatment. The bottom line is that experience matters.

Factors that contributed to greater disagreement included: a low case volume, small practice size, nonacademic practice and high breast density.

The study has many weaknesses. Chief among them: only a single slide per case was given to each pathologist. In my own practice, this never happens. I review multiple slides, often ordering several additional deeper sections and ancillary special stains, and study each carefully. That practice was prohibited in the study.

Additionally, the study cases were a mixture of core biopsy and excision specimens. A core biopsy is obtained with a needle, often by a radiologist, and removes a small core of tissue. An excision, or “lumpectomy,” is done in the operating room and removes a larger section of breast tissue. Diagnostic criteria differ between a needle core and an excision. Oftentimes it is not necessary to render an exact diagnosis on the core biopsy; rather, the goal is to recognize an abnormality and recommend an excision, in which the additional tissue will clarify the diagnosis.

Even the experts disagreed among themselves in the study (75 percent initial agreement, rising to 90 percent after discussion).

This illustrates the fact that pathology is both a science and an art. Experts may have been taught to stress slightly different criteria in their pathology training programs. The “eye of a pathologist” is difficult to quantify and depends on multiple factors that function best in real time, not in an artificial study.

Another weakness is that there is no evidence the experts were any more accurate in predicting outcomes than the study participants. Perhaps most importantly, a second opinion was not allowed in the study, even when participants indicated uncertainty. These are in fact the very cases that would most likely have been shown around, sent out for consultation and further worked up.

The study caseload was also unrealistically weighted toward atypical hyperplasia and DCIS. Since these borderline cases represent only a small fraction of breast biopsies in actual practice, diagnostic agreement in routine practice is higher than the rate reported in this study. No clinical information other than the patient’s age was given to the study pathologists, and no imaging findings were included. In actual practice, the clinical setting and imaging findings are routinely integrated into the diagnosis.

The findings are not unique to pathology. All of medicine has grey zones, where controversy often exists. The study does have an important message for pathologists. As noted in the accompanying editorial, it should serve as a “call to action.” A better, more reproducible definition of atypical hyperplasia is needed.

The article highlights the need for an active quality management program in surgical pathology that includes targeted review of difficult or high-risk cases. The College of American Pathologists (CAP) and the Association of Directors of Anatomic and Surgical Pathology have been developing an evidence-based guideline, expected to be released in May, with recommendations for reducing interpretive diagnostic errors in anatomic pathology.

The CAP is proactively addressing educational opportunities through advanced breast pathology training programs designed to provide a route for pathologists to demonstrate their expertise regardless of the setting in which they practice.

Patients can take steps to help ensure their breast biopsy is read accurately:

• Inquire about the pathology laboratory that will examine your tissue sample. Is the laboratory accredited? The CAP accredits more than 7,600 laboratories worldwide and provides an online directory for patients.

Study: Most Doctors Flunk Math Of Medical Test Accuracy

File under: “Does not inspire confidence.”

You’ve just been screened for a rare disease, one that only strikes 1 in 1,000 people. You test positive, your doctor tells you. Your heart drops into your stomach. “Is there any chance the test could be wrong?” you ask, your voice tinged with pleading.

“There’s a very small chance,” your doctor replies. “The test has a false positive rate of 5 percent. But that means there’s a 95 percent chance that you do have the disease.”

Bzzzzzzz. That’s the jarring sound of the game show buzzer that means “Wrong. Wrong. Wrong.”

A new study in the journal JAMA Internal Medicine posed just such a scenario to “24 attending physicians, 26 house officers, 10 medical students, and 1 retired physician” at a Boston‐area hospital. The hospital is not named, but all were affiliated with not-shabby medical schools: Harvard and Boston University.

And the vast majority blew the question. (Which was: “If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?”)

Close to half gave the answer “95 percent.” Not even close. The correct answer is about 2 percent: in a group of 1,000 people, roughly one person truly has the disease, while about 50 of the 999 healthy people (5 percent) will test positive anyway, so a positive result means only about 1 chance in 51 of actually being sick. (For an explanation of the math, check out Tom Siegfried’s excellent Science News post, which uses the analogy of a baseball player who flunks a drug test.)
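
For readers who want to check the arithmetic themselves, here is a minimal sketch in Python of the Bayes’ rule calculation behind that 2 percent figure. It assumes, as the original question implicitly does, that the test catches every true case (100 percent sensitivity):

# Bayes' rule for the screening-test question, assuming perfect sensitivity
prevalence = 1 / 1000        # 1 in 1,000 people actually have the disease
false_positive_rate = 0.05   # 5 percent of healthy people test positive anyway
sensitivity = 1.0            # assumption: every sick person tests positive
# Chance of any positive result: true positives plus false positives
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
# Chance the patient is actually sick, given a positive result
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"{p_disease_given_positive:.1%}")  # prints 2.0%

Swap in the real sensitivity of a given test and the number shifts a bit, but for a disease this rare the false positives always swamp the true ones.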

The study — “Medicine’s Uncomfortable Relationship With Math” — replicated a similar math check done in 1978, and found little progress: 23 percent got the answer right in 2013, compared to 18 percent of a similar group in the 1978 study.

Yikes. What are patients — and doctors — to do about such medical innumeracy? I contacted the Boston-based Informed Medical Decisions Foundation, and spoke to research director Carrie Levin.

Coming Soon: $10M ‘Tricorder’ X Prize For Self-Diagnosis

Jim, I'm a doctor, not a device developer!

Remember how in Star Trek, Dr. McCoy could point his tricorder at things and magically — oops, I mean, “scientifically” — determine all kinds of chemical components and medical diagnoses?

The X Prize Foundation — best known for offering prizes for space flight and cheap genome decoding — is about to put up $10 million for the invention of a medical device in the spirit of that tricorder: It must be able to diagnose 15 common medical conditions within three days, with no intervention from a health care professional.

Eileen Bartholomew, the foundation’s senior director of life sciences prize development, spoke today at the Center for Connected Health symposium in Boston, and said the tricorder prize is expected to be officially launched in 2012.

She offers some rough guidelines in the video below, including examples of the conditions it should be able to diagnose, from hypertension to sleep apnea. The prize, underwritten by Qualcomm, was first previewed in May — here’s the announcement — but the details have taken more shape since then. The contest could last about three years, Eileen said.

An interesting twist: Did you ever think about the tricorder as a tool for patient empowerment? That seems to be part of the X Prize idea: The real-life tricorder isn’t meant for latter-day Dr. McCoys, it’s meant for patients, for us. If it can be developed, each of us could unleash our inner Bones…