Wednesday, March 28, 2018

Bone-in-the-Nose Medicine "Finds" New Organ Underlying Taoist Alchemy/Chi

Nature |  Probe-based confocal laser endomicroscopy (pCLE) provides real-time histologic imaging of human tissues at a depth of 60–70 μm during endoscopy. pCLE of the extrahepatic bile duct after fluorescein injection demonstrated a reticular pattern within fluorescein-filled sinuses that had no known anatomical correlate. Freezing biopsy tissue before fixation preserved the anatomy of this structure, demonstrating that it is part of the submucosa and a previously unappreciated fluid-filled interstitial space, draining to lymph nodes and supported by a complex network of thick collagen bundles. These bundles are intermittently lined on one side by fibroblast-like cells that stain with endothelial markers and vimentin, although there is a highly unusual and extensive unlined interface between the matrix proteins of the bundles and the surrounding fluid. We observed similar structures in numerous tissues that are subject to intermittent or rhythmic compression, including the submucosae of the entire gastrointestinal tract and urinary bladder, the dermis, the peri-bronchial and peri-arterial soft tissues, and fascia. These anatomic structures may be important in cancer metastasis, edema, fibrosis, and mechanical functioning of many or all tissues and organs. In sum, we describe the anatomy and histology of a previously unrecognized, though widespread, macroscopic, fluid-filled space within and between tissues, a novel expansion and specification of the concept of the human interstitium.

Independent |  The team behind the discovery suggest the compartments may act as “shock absorbers” that protect body tissues from damage.

Mount Sinai Beth Israel Medical Center medics Dr David Carr-Locke and Dr Petros Benias came across the interstitium while investigating a patient’s bile duct, searching for signs of cancer.

They noticed cavities that did not match any previously known human anatomy, and approached New York University pathologist Dr Neil Theise to ask for his expertise.

The researchers realised traditional methods for examining body tissues had missed the interstitium because the “fixing” method used to prepare medical microscope slides drains away fluid, thereby destroying the organ’s structure.

Drained and flattened on slides, the structures had been dismissed as a simple layer of connective tissue rather than recognised for what they are: bodywide, fluid-filled shock absorbers.

Having arrived at this conclusion, the scientists realised this structure was found not only in the bile duct, but surrounding many crucial internal organs.

“This fixation artefact of collapse has made a fluid-filled tissue type throughout the body appear solid in biopsy slides for decades, and our results correct for this to expand the anatomy of most tissues,” said Dr Theise.

Sunday, March 18, 2018

Why Bone-in-the-Nose Medicine Trumps Medical Science Most of the Time...,

medium  |  A fundamental tenet of science is that findings must be reproduced. One experiment does not establish new truths. The results have to be replicated by others using the methods described by the original investigators. Replication is key to ensuring that conclusions aren’t spurious. Nevertheless, science is currently plagued by hordes of irreproducible study results.
“More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments.”
Certainly, misidentification of cells is a major contributor to the replication crisis in basic biological science. However, statistics and publication bias combine to form another formidable pseudo-scientific edifice that churns out irreproducible results across scientific genres and misleads the public.

The infamous P-value lies at the heart of the matter. Simply put, the P-value estimates how likely it is that results at least as extreme as a given experiment’s would arise by chance alone, assuming no real effect exists. The cutoff widely accepted across scientific disciplines is 5%. In other words, as long as the statistics say that the likelihood of such a result arising by chance alone is 5% or less, the result is considered “significant.” That might sound good at first glance, but when examined a little more closely, in conjunction with the concept of publication bias, the limitations rapidly mount.

The significance of the 5%, or .05, P-value is utterly arbitrary. A man named Ronald Fisher made it up back in the 1920s. It’s based on the rough approximation of how much of a normal (Gaussian) distribution will fall within two standard deviations of the mean — about 95%. (I’m not going to get into the problems with the normal distribution in this post, but I will recommend that anyone interested in this concept read Nassim Nicholas Taleb’s book The Black Swan.)
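Fisher’s rough approximation is easy to check for yourself. This short Python sketch (not from the Medium piece; just an illustration) computes the fraction of a normal distribution falling within a given number of standard deviations of the mean, using the standard error function:

```python
import math

def fraction_within(k):
    """Fraction of a normal (Gaussian) distribution lying within
    k standard deviations of the mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

print(f"within 2 SD:    {fraction_within(2):.4f}")     # roughly 0.9545
print(f"within 1.96 SD: {fraction_within(1.96):.4f}")  # roughly 0.9500
```

Exactly 95% corresponds to about 1.96 standard deviations; “two standard deviations” is the convenient round number that became the convention.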

A P-value threshold of .05 implies that, when no real effect exists, about one result in 20 will nonetheless appear significant by chance. But how many millions of results are obtained from scientific experiments each year around the world? An incalculable number. It’s virtually guaranteed that thousands of results due to chance alone emerge from the realm of theory and intrude on what we presume to call reality each year. And those are the results that get published.

Scientists working in academia must, as the saying goes, publish or perish. And the journals in which those anxious scientists try to publish their results need to make money, which means attracting readers. Results that are not “statistically significant” are boring. No reader wants to pay for a journal full of articles that say “we did this study using really careful methods, and nothing happened, it didn’t work. End of story.” If science were fully transparent, and results of all experiments were published, however, this is exactly what the vast majority of papers would say.

The failure of negative study results to ever see the light of day creates staggering waste. It’s likely that many basic experiments have been repeated over and over again, with uninteresting results, and subsequently never published. Then, another research group comes along and does the experiment again (because they didn’t know about the previous null results) and, by chance alone, finds a positive result. Of course, that result is interesting and gets published. This basic cycle is why John Ioannidis’s now-famous 2005 paper was titled “Why Most Published Research Findings Are False.”
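The file-drawer cycle can be simulated too. In this Python sketch (again illustrative, not from the Medium piece), every experiment tests an effect that is truly zero, but only the “significant” ones get published. The published literature then shows a substantial average effect where none exists:

```python
import math
import random

random.seed(0)

def experiment(n=30):
    """One null experiment: the true effect is zero.
    Returns (observed mean, two-sided z-test p-value)."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(sample) / n
    z = mean * math.sqrt(n)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return mean, p

results = [experiment() for _ in range(10_000)]
published = [abs(m) for m, p in results if p < 0.05]  # the file drawer keeps the rest

print(f"experiments run:         {len(results)}")
print(f"experiments published:   {len(published)}")
print(f"mean |effect| overall:   {sum(abs(m) for m, _ in results) / len(results):.3f}")
print(f"mean |effect| published: {sum(published) / len(published):.3f}")
```

Because only results clearing the significance bar survive, the published experiments report effect sizes several times larger than the true average — selection alone manufactures an apparent finding.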

Friday, January 26, 2018

Alzheimer's Is Type 3 Diabetes...,

theatlantic |  In recent years, Alzheimer’s disease has occasionally been referred to as “type 3” diabetes, though that moniker doesn’t make much sense. After all, though they share a problem with insulin, type 1 diabetes is an autoimmune disease, and type 2 diabetes is a chronic disease caused by diet. Instead of another type of diabetes, it’s increasingly looking like Alzheimer’s is another potential side effect of a sugary, Western-style diet.

In some cases, the path from sugar to Alzheimer’s leads through type 2 diabetes, but as a new study and others show, that’s not always the case.

A longitudinal study, published Thursday in the journal Diabetologia, followed 5,189 people over 10 years and found that people with high blood sugar had a faster rate of cognitive decline than those with normal blood sugar—whether or not their blood-sugar level technically made them diabetic. In other words, the higher the blood sugar, the faster the cognitive decline.
“Dementia is one of the most prevalent psychiatric conditions strongly associated with poor quality of later life,” said the lead author, Wuxiang Xie at Imperial College London, via email. “Currently, dementia is not curable, which makes it very important to study risk factors.”

Melissa Schilling, a professor at New York University, performed her own review of studies connecting diabetes to Alzheimer’s in 2016. She sought to reconcile two confusing trends. People who have type 2 diabetes are about twice as likely to get Alzheimer’s, and people who have diabetes and are treated with insulin are also more likely to get Alzheimer’s, suggesting elevated insulin plays a role in Alzheimer’s. In fact, many studies have found that elevated insulin, or “hyperinsulinemia,” significantly increases your risk of Alzheimer’s. On the other hand, people with type 1 diabetes, who don’t make insulin at all, are also thought to have a higher risk of Alzheimer’s. How could these both be true?

Schilling posits this happens because of insulin-degrading enzyme, which breaks down both insulin and amyloid proteins in the brain—the same proteins that clump up and lead to Alzheimer’s disease. People who don’t have enough insulin, like those whose bodies’ ability to produce insulin has been tapped out by diabetes, aren’t going to make enough of this enzyme to break up those brain clumps. Meanwhile, in people who use insulin to treat their diabetes and end up with a surplus of insulin, most of this enzyme gets used up breaking that insulin down, leaving not enough enzyme to address those amyloid brain clumps.
According to Schilling, this can happen even in people who don’t have diabetes yet—who are in a state known as “prediabetes.” It simply means your blood sugar is higher than normal, and it’s something that affects roughly 86 million Americans.

Schilling is not primarily a medical researcher; she’s just interested in the topic. But Rosebud Roberts, a professor of epidemiology and neurology at the Mayo Clinic, agreed with her interpretation.