In the same paper he said “It was discussed that in theory the most rational combinations with rapamycin are mild calorie and fat restriction, physical exercise and metformin [52]. Metformin may in theory counteract rapamycin-induced gluconeogenesis in the liver. And this rational drug combination may be also considered as treatment of type 0 diabetes.”

I fast, exercise and take metformin on my rapa dosing day…just to be careful. Here’s the paper for anyone who wants to read it.

6 Likes

I’m assuming that acarbose or an SGLT2 inhibitor would serve the same function.

4 Likes

Thank you so much for digging up this information :blush:!

1 Like

The dirty little secret of animal studies (and not only) is that while the lay public reads them as if they were gospel, those who actually work in the labs as techs and researchers see a whole different world as the sausage is made. There is always something or other that is not being taken into account, from lab temperature when the AC fails one summer, to communicable diseases that spread among the cages, to techs who carry trace substances on their coats from one part of the lab to another, to chow that was delivered and sat out in the summer heat for hours before the lab opened, and so on endlessly… just like in any walk of life.

Talk amongst techs and researchers is often scuttlebutt about how this lab or that lab is known for very poor animal husbandry, how its results should always be seen in that light, and so on. It’s life, not gospel.

2 Likes

This is why I laugh every time a new study comes out and people get excited or discuss it with fanfare. I don’t claim to be the world’s greatest scientist, but I have worked with them and learned from them, and what the public sees vs. what underlies the beautiful figures and legends is like the difference between a photo of a happy Instagram couple and what goes on behind their private doors.

When reading papers, the most important information is often 1) buried in the methods and supplemental methods/data or 2) unavailable, because the researchers chose not to disclose it. Perhaps even more important is knowing who the first author is on a personal level and what they did step by step (but of course that information is unavailable). I’ve seen so many papers with a good story, good logic, and apparently good execution, but only by having been in the field, and knowing what the top 1% of experimental execution looks like, would I have known that the data were, no hyperbole, all crap.

It is a fallacy to believe that two experiments, done using the same methods, should result in the same outcome. I cannot even begin to explain how underestimated the human variable is.

3 Likes

OK…we’ve certainly had a lot of discussion here about the problems with studies and papers published in science and medical journals - and the bad, strictly for profit journals - and the bad studies out of China, etc. And as you said, knowing the reputation of the lead author/s and the research facility helps. But are you saying that only the “top 1%” are any good? or that the majority are “crap”? If that’s the case, just how are we, the little people, supposed to get any useful information? Ask the 8 ball? Use the ouija board? Oh, that’s right, asking AI is just as good as those…but, wait, isn’t AI based on all those faulty studies?

Sorry for my attitude…but this is RapaNews, after all, scientific studies, and links to them are our bread and butter…so when you attack them, it’s like burning the Quran in the middle of Tehran.

1 Like

LOL, right. The main guy on a study doesn’t even set foot in the lab but sits in his office while his underlings do the actual study, and they in turn often don’t know what the lab techs are doing in the off hours, when it’s less time-consuming to skip protocol and take shortcuts, thus undermining the whole study.

In the monkey study, it was comical that the actual researchers involved had no inkling that there were any shenanigans afoot; it was a visiting scientist, a guest, Nir Barzilai, who noticed that there were some weight discrepancies and that the only explanation was sabotage. Only then did they launch an investigation and uncover the surreptitious feeding that had been going on for at least a year while they were blissfully unaware that anything was wrong. Left to their own devices, they’d have published a study saying “yep, nothing to see here,” and then everyone else would be discussing, citing, and basing conclusions on a completely bogus study.

Let us remember that, according to thorough investigations, a large majority of, for example, the most-cited studies on various cancer drugs cannot be replicated. Replication is the most basic requirement of the scientific method. It is why the ITP uses three separate labs simultaneously to test its interventions; they’re not doing it just for the heck of it. But for cost, convenience and logistical reasons pretty much nobody else does it. And that’s how you get the vast majority of all studies being crap (as the investigator from Stanford showed).

2 Likes

Very funny…so it was like Homer Simpson in charge of the nuclear power plant.

Let me post links, because everything I could find did not say “the vast majority”, although I agree that it is a significant problem that shows up in the science news with some regularity (and is mentioned by people like Matt Kaeberlein). First, Wikipedia:

https://en.wikipedia.org/wiki/Why_Most_Published_Research_Findings_Are_False

It notes that other statisticians thought the Stanford paper was exaggerated:

Biostatisticians Jager and Leek criticized the model as being based on justifiable but arbitrary assumptions rather than empirical data, and did an investigation of their own which calculated that the false positive rate in biomedical studies was estimated to be around 14%, not over 50% as Ioannidis asserted.

Statistician Ulrich Schimmack reinforced the importance of the empirical basis for models by noting the reported false discovery rate in some scientific fields is not the actual discovery rate because non-significant results are rarely reported. Ioannidis’s theoretical model fails to account for that, but when a statistical method (“z-curve”) to estimate the number of unpublished non-significant results is applied to two examples, the false positive rate is between 8% and 17%, not greater than 50%.

Ironic, isn’t it, that a Stanford study claiming that “Most Published Research” was false was, itself, wildly inaccurate.
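For context, Ioannidis’s headline claim falls out of a simple positive-predictive-value formula, and the answer swings wildly with the assumed pre-study odds and statistical power. Here’s a minimal sketch; the parameter values are illustrative choices of mine, not from any specific study:

```python
def ppv(prior_odds_r, alpha=0.05, power=0.8):
    """Positive predictive value: the fraction of statistically
    'significant' findings that reflect true relationships, given
    pre-study odds R, significance level alpha, and power (1 - beta)."""
    return (power * prior_odds_r) / (power * prior_odds_r + alpha)

# Well-powered confirmatory study, 1:1 pre-study odds:
print(round(1 - ppv(1.0), 3))                 # false-positive share ~0.059
# Exploratory field, 1:10 pre-study odds, 40% power:
print(round(1 - ppv(0.1, power=0.4), 3))      # ~0.556, "most findings false"
```

The same algebra yields a ~6% false-positive share under generous assumptions and over 50% under pessimistic ones, which is how Jager and Leek’s empirical ~14% estimate and Ioannidis’s “greater than 50%” can both be defended depending on the inputs you pick.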

Then from AI because it’s AI.

According to various studies, estimates suggest that around 1-2% of scientific papers published are considered “fake” or closely resemble paper-mill works, with the rate potentially reaching 3% in fields like biology and medicine; however, the true percentage of faulty scientific papers is likely higher due to issues like questionable research practices that may not be readily identifiable as outright fabrication, with some studies indicating a potentially much larger proportion of unreliable research findings depending on the field and study design.

A new discussion of scientific fraud including the case that brought down the president of Stanford.

https://law.unh.edu/blog/2023/07/legal-impact-research-controversy-stanford-university

Another (newer) study out of Stanford.

https://news.stanford.edu/stories/2015/11/fraud-science-papers-111615

From the University of Kentucky - compilation of articles regarding Research Misconduct.

https://www.research.uky.edu/research-misconduct/research-misconduct-news

And again from Wikipedia - List of scientific misconduct incidents

https://en.wikipedia.org/wiki/List_of_scientific_misconduct_incidents

In conclusion, I’ll again quote reaction to the original Stanford study and paper by Ioannidis.
" In commentaries and technical responses, statisticians Goodman and Greenland identified several weaknesses in Ioannidis’ model.[10][11] Ioannidis’s use of dramatic and exaggerated language that he “proved” that most research findings’ claims are false and that “most research findings are false for most research designs and for most fields” [italics added] was rejected."

And here’s a brand new study from September 24, 2024.

1 in 7 scientific papers is fake, suggests study that author calls ‘wildly nonsystematic’

Heathers’ study pulls data from 12 different analyses from the social sciences, medicine, biology, and other fields of research. All those studies have one thing in common: The authors of each used various online tools to estimate the amount of fakery taking place in a set of papers.

“There’s a really persistent commonality to them,” Heathers said. “The rough approximation for where we end up is that one in seven research papers are fake.”

Heathers said he decided to conduct his study as a meta-analysis because his figures are “far flung.”

https://retractionwatch.com/2024/09/24/1-in-7-scientific-papers-is-fake-suggests-study-that-author-calls-wildly-nonsystematic/

1 Like

Of course I should clarify I’m using a teeny tiny bit of hyperbole hehe. This is just my opinion, of course, but I would frame it as: the variability in how an experiment is executed by individual lab personnel (despite what might seem to be a straightforward methods/protocol that anyone should theoretically be able to follow… the whole point of papers and replicability) is hugeeee. This might not appear to be the case in very large labs where techniques are supposedly ‘standardized’, and personnel should therefore be swappable or interchangeable. But in my experience, it is anything but the case.

It extends to the data analysis, where there is tons of pre-processing prior to the final ‘cleaned’ data set. How individual scientists choose to filter those data depends on their own aptitude (a lot of times there aren’t standardized protocols for how to view your data… who knows, maybe AI will help in the future).

I think one nice property of science is that consistency in the quality of research produced over time gives some signal as to the competency of both the professor and the first authors. It is indeed hard to filter who is of actual quality, which is partly why science is so hard and why we see all these replicability issues. You think the public has it hard? Scientists have it just as hard! I can’t recall the number of times I was reading a paper that seemed clean, but I just couldn’t trust its conclusions because I knew the quality of the results was crap.

2 Likes

Yes, the Stanford professor issue drives home that point.

Thus even large and famous labs are not exempt from this quality issue. It’s why I preferred working for startup professors. There’s a lot of overlap between startup companies and startup labs. Of course, startups can be prone to scandal as well, but if you find a good one, the PI will be extremely involved, nitpicking everything, and everyone will be extremely aware of what each person is doing. And their entire future reputation will depend on the quality of the work they produce in the early years. That is conducive to focusing on the details and not letting things slide. On the other hand, some (not all) large labs have professors who don’t have time to oversee the minutiae of their research operations and can only focus on the big picture. Their postdocs effectively act as their right hands, and those postdocs’ ethics will be paramount.

I digress. But I think my point is less about ethics and more just about quality. I’m very biased towards data in which the interpretation is almost binary: not the conclusion of the paper, but the individual datum. In theory, and I’ve witnessed this in practice, if you are extremely, extremely detailed and just plain good at experimentation, your results ought to be extremely consistent. A lot of the randomness you see in the statistics of papers does in fact come down to poor human execution, not to randomness in the biology.

2 Likes

I’ll give you one specific example: fluorescent, confocal, EM, any microscopy technique for that matter, is extremely nuanced. Let’s say you see photos of a fluorescent marker in a cell. How was it imaged? OK, the methods describe: “A Nikon epifluorescent scope was used, exposed at 100ms with a Z-stack of 20 frames, 0.2um separation in between.” OK… that seems very detailed, a technique that should be reproducible by any lab member.

What about the use of a neutral density filter? Which filter did you use? If I don’t have that info, and if I don’t know exactly how many seconds you exposed the cell to fluorescent light before hitting the record button, I have no idea how much photobleaching and phototoxicity you introduced into the cells. And if you’re quantifying the fluorescence levels of the cell, which are extremely sensitive to these factors, how should I trust the numbers you give? This isn’t hypothetical because we’ve run into this exact issue with papers before.

Don’t want to belabor the point toooo much, but good labs have baseline standards for this kind of stuff. But EVEN THEN, I’ve seen that there are levels to this game. It’s really interesting, to see the top 1% of scientists and how they approach the method I’ve described vs someone just following protocols diligently. The top 1% scientists (using someone I worked with before as an example) will analyze every aspect of their instrument, think about all the seemingly irrelevant factors, and bake them into their approach. And out comes results that you wouldn’t have otherwise gotten.

3 Likes

If I had to choose between CR and Rapamycin, it’s a no-brainer that I would choose Rapamycin. CR has to be done very carefully and precisely and you’ll be miserable the entire time and in the end, you may not get much life extension. Rapamycin should give you the same or better benefits as CR while you can stay happy and sometimes even euphoric! :wink:

Rapamycin wins in my book.

2 Likes

Good post. There appear to be two things. One is skill and integrity in carrying out the research: as you say, taking care of the details. The other is deliberate misrepresentation for personal benefit. I don’t think we will ever get perfect repeatability; there are too many variables to control. Even the ITP doesn’t get that. But honesty and integrity can be cleaned up, starting with the journals and peer review.
Here’s a good article on China; they are at least aware of the problem.

https://www.linkedin.com/pulse/unpacking-new-chinese-guidelines-responsible-research-rob-johnson-0doue

One of the interesting conclusions on China - Trust the results more if the lead researcher is female…they’re more honest.

And the Journal “Nature” has some good, paywalled articles. I’d love to get the full text.

Elite researchers in China say they had ‘no choice’ but to commit misconduct

https://www.nature.com/articles/d41586-024-01697-y

1 Like

LOL, ‘no choice’ sounds hilarious. But in many ways I understand the pressure; a lot of us have been there.

Here’s another example of something most people probably would not know unless they had practiced it: when you culture certain kinds of cells, for example neurons in a petri dish, the way the neurons grow over time in the medium is significantly influenced by how they were dissected. Dissection is not something you can just report in a methods section or tell someone how to do and then, voila! It’s an art. Now, theoretically you ought to be able to design your experiments such that the variability in the growth of the neurons shouldn’t affect your results THAT much, but biology is so complex that if your neurons aren’t even growing the same way each time you do the experiment, how can you have confidence in the downstream results?

3 Likes

Yes, I think the glaring elephant in the room that the public doesn’t appreciate, even though it’s sitting in plain sight, is that rapamycin is a single variable (more variables if you’re talking about dosing, brand, adjuvants, etc.). But let’s ignore those other specific variables and just compare the order-of-magnitude difference in variable count between CR and rapamycin. With rapamycin you’re introducing the exact same molecule to both the organisms of study and to humans. With CR, on the other hand, you’re removing completely different compounds: in mice, whatever they’re being fed (you have to believe you could map what they eat 1:1 onto what humans eat, which, put that way, seems nuts), vs. whatever humans are being fed. The combinatoric calculation of just how many variables are involved in both scenarios would probably land on the order of the number of atoms in the universe (since you’re factoring in individuality with humans vs. controlled cohorts with mice or whatever other organism).

1 Like

Here’s the other factor that bugs me. The caloric restriction studies are done by restricting mouse chow. Is mouse chow healthy or is it the equivalent of a mousy McDonald’s meal every day? If the latter, then of course CR will be beneficial. Would a CR diet be helpful to someone who is a healthy pescovegetarian? I’m not sure.

I remember reading a pair of marmoset CR studies done at Midwest USA zoos. The cohort of marmosets that ate an equivalent to a normal American diet benefitted from CR. The cohort of marmosets that ate a healthy vegetarian-based diet showed NO benefit from CR. Are all the CR studies just pointing to the fact that if you eat less ‘crap’ and ultra-processed foods you will live longer?

2 Likes

Apologies for this tangent but I think this is a very interesting parable about the story of the Stanford president (although it could be about David Sinclair).

Once again, let’s go back to the drama coming out of Stanford as an example. You have high-impact research, which could have significant potential, not least for the ego of the principal investigator, who I presume was President Marc Tessier-Lavigne of Stanford. And so the problem is that they want to publish in high-impact journals such as Science or Nature, which increases the pressure to generate results that will verify their theory; for example, the neurological basis of Alzheimer’s disease, which would be a fantastic scientific breakthrough.

However, it becomes a spiral, and apparently the laboratory is a pressure cooker where, according to the Stanford article, those with data supporting the proposition are favored and those who do not produce supporting data are not. So it becomes a cycle, and I believe it’s really the principal investigator who has to step forward and say, “I’m here to find out what’s going on,” instead of, “I’m here to establish what I want to be the truth.” It’s finding the truth, versus “I want this to be the truth, therefore it is the truth.”

And the sad thing is that eventually somebody finds out, as we saw in this article, that the data are non-verifiable, and that’s where this house of cards begins to collapse.

A. J., it’s almost like a system problem: superstar scientists pressure people in the lab, who are rewarded when they find the “right” thing, so that the superstar scientists can publish in Science and Nature and then become the somebody who discovered the neurological basis of Alzheimer’s disease, and then they become a great person.

Unfortunately, this has been going on for a long time, but an additional caveat is that nowadays, with the whole social media and internet world, I believe it may be amplified to a certain extent.

A. J. Kierstead:

Yeah. I mean, the rockstar scientist that’s also getting this scientific journal article reshared by the New York Times, Washington Post, USA Today, you start hitting that end of things, I mean, it only makes you look even bigger, and it gets you those high-end, like the Stanford President role, things like that, which are very lucrative, and give you even more respect, which doesn’t necessarily shift down the pressures that are required to maintain it.

Stan Kowalski:

Yes, you’re right. And it goes to the egos. When I did 20 years of biochemistry research, there were people like that; I knew people like that at the time, and I thought to myself, if you want to be famous you’re in the wrong business, why aren’t you in show business? That’s where you become famous, not science. But still, they have that mindset.

For example, I had one scientist say to me, “My dream is to have my name in lights in Nature or Science.” And I thought to myself, that’s an odd way of looking at things. But then that mindset begins to generate this kind of atmosphere, and the pressure runs from top to bottom. And then there’s additional pressure because of funding; in other words, we have to find results to satisfy NIH or NSF, or whoever is funding the research.

And there are shades of data, which are either unreliable, unverifiable, or they may be outright fraud.

So for example, there could be a series of experiments where there’s always a question like, this could be A, or B, but we just want it to be B, and we won’t test A again, and again, to make sure that we’re right. So there’s various shades of this kind of problem.

Thankfully, science is, to a great extent, an iterative process, and it’s also competitive.

So the competition can be good, and bad. In this case, with Stanford, the competition had a bad effect, because I believe the attitude of the president was, “I’m going to beat everybody. I’m going to win.” But the competition would be good, because other scientists can say, “Well, let’s take a closer look at what you have.” That’s like a two-sided coin as well.

https://law.unh.edu/blog/2023/07/legal-impact-research-controversy-stanford-university

3 Likes

Mallapaty, S. (2024). Elite researchers in China say they had ‘no choice’ but to commit misconduct. Nature.

Elite researchers in China say they had ‘no choice’ but to commit misconduct

Anonymous interviewees say they engaged in unethical behaviour to protect their jobs — although others say study presents an overly negative view.

By Smriti Mallapaty

Interviews with staff and students at three elite Chinese universities revealed a sense of pressure to publish. Credit: Hao Qunying/Costfoto/Sipa USA via Alamy

“I had no choice but to commit [research] misconduct,” admits a researcher at an elite Chinese university. The shocking revelation is documented in a collection of several dozen anonymous, in-depth interviews offering rare, first-hand accounts of researchers who engaged in unethical behaviour — and describing what tipped them over the edge. An article based on the interviews was published in April in the journal Research Ethics [1].

The interviewer, sociologist Zhang Xinqu, and his colleague Wang Peng, a criminologist, both at the University of Hong Kong, suggest that researchers felt compelled, and even encouraged, to engage in misconduct to protect their jobs. This pressure, they conclude, ultimately came from a Chinese programme to create globally recognized universities. The programme prompted some Chinese institutions to set ambitious publishing targets, they say.

The article offers “a glimpse of the pain and guilt that researchers felt” when they engaged in unethical behaviour, says Elisabeth Bik, a scientific-image sleuth and consultant in San Francisco, California.

But other researchers say the findings paint an overly negative picture of the Chinese programme. Zheng Wenwen, who is responsible for research integrity at the Institute of Scientific and Technical Information of China, under the Ministry of Science and Technology, in Beijing, says that the sample size is too small to draw reliable conclusions. The study is based on interviews with staff at just three elite institutes — even though more than 140 institutions are now part of the programme to create internationally competitive universities and research disciplines.

Rankings a game

In 2015, the Chinese government introduced the Double First-Class Initiative to establish “world-class” universities and disciplines. Universities selected for inclusion in the programme receive extra funding, whereas those that perform poorly risk being delisted, says Wang.

Between May 2021 and April 2022, Zhang conducted anonymous virtual interviews with 30 faculty members and 5 students in the natural sciences at three of these elite universities. The interviewees included a president, deans and department heads. The researchers also analysed internal university documents.

The university decision-makers who were interviewed at all three institutes said they understood it to be their responsibility to interpret the goals of the Double First-Class scheme. They determined that, to remain on the programme, their universities needed to increase their standing in international rankings — and that, for that to happen, their researchers needed to publish more articles in international journals indexed in databases such as the Science Citation Index.

Some universities treated world university rankings as a “game” to win, says Wang.

As the directive moved down the institutional hierarchy, pressure to perform at those institutes increased. University departments set specific and hard-to-reach publishing criteria for academics to gain promotion and tenure.

Some researchers admitted to engaging in unethical research practices for fear of losing their jobs. In one interview, a faculty head said: “If anyone cannot meet the criteria [concerning publications], I suggest that they leave as soon as possible.”

Zhang and Wang describe researchers using services to write their papers for them, falsifying data, plagiarizing, exploiting students without offering authorship and bribing journal editors.

One interviewee admitted to paying for access to a data set. “I bought access to an official archive and altered the data to support my hypotheses.”

An associate dean emphasized the primacy of the publishing goal. “We should not be overly stringent in identifying and punishing research misconduct, as it hinders our scholars’ research efficiency.”

Not the whole picture

The authors “hit the nail on the head” in describing the relationship between institutional pressure and research misconduct, says Wang Fei, who studies research-integrity policy at Dalian University of Technology.

But she says it’s not the whole picture. Incentives to publish high-quality research are part of broader reforms to the higher-education system that “have been largely positive”. “The article focuses almost exclusively on the negative aspects, potentially misleading readers into thinking that Chinese higher education reforms are severely flawed and accelerating research misconduct.”

Tang Li, a science- and innovation-policy researcher at Fudan University in Shanghai, agrees. The first-hand accounts are valuable, but the findings could be biased, she says, because those who accepted the interview might have strong feelings and might not represent the opinions of those who declined to be interviewed.

Zheng disagrees with the study’s conclusions. In 2020, the government issued a directive for Double First-Class institutes. This states specifically that evaluations should be comprehensive, and not just focus on numbers of papers, she says. Research misconduct is a result not of the Double First-Class initiative, but of an “insufficient emphasis on research integrity education”, says Zheng.

Punishing misconduct

The larger problem, says Xiaotian Chen, a library and information scientist at Bradley University in Peoria, Illinois, is a lack of transparency and of systems to detect and deter misconduct in China. Most people do the right thing, despite the pressure to publish, says Chen, who has studied research misconduct in China. The pressure described in the paper could just be “an excuse to cheat”.

The Chinese government has introduced several measures to crack down on misconduct, including defining what constitutes violations and specifying appropriate penalties. They have also banned cash rewards for publishing in high-impact journals.

Wang Peng says that government policies need to be more specific about how they define and punish different types of misconduct.

But Zheng says that, compared with those that apply in other countries, “the measures currently taken by the Chinese government to punish research misconduct are already very stringent”.

The authors also ignore recent government guidance for elite Chinese institutions to break with the tendency of evaluating faculty members solely on the basis of their publications and academic titles, says Zheng.

Tang points out that the road to achieving integrity in research is long. “Cultivating research integrity takes time and requires orchestrated efforts from all stakeholders,” she says.

And the pressure to publish more papers to drive up university rankings “is not unique to China”, says Bik. “Whenever and wherever incentives and requirements are set up to make people produce more, there will be people ‘gaming the metrics’.”

References

  1. Zhang, X. & Wang, P. Res. Ethics https://doi-org.library.smcvt.edu/10.1177/17470161241247720 (2024).
3 Likes

However, the pressure is higher in China. That’s why 50% of their research is faked. At Western universities that number is closer to 10%. Japanese universities are the most reputable.

2 Likes

All three may help with rapa-induced diabetes / less-optimal glucose control.

But since Dr B mentioned metformin, note that it may be the only one that helps with rapa rebound effects (even if that may not have been why he mentioned it); see below*

And acarbose may be the only one that helps keep mTORC2 up while negative side effect of rapa is to knock it down

*See this rich comment

If you’re still reading (and still concerned), then consider pairing rapamycin with metformin.

Metformin inhibits mTORC1 without releasing negative feedback loops and overstimulating AKT. It stimulates AMPK by inhibiting mitochondrial complex I. AMPK then phosphorylates IRS-1 (Insulin Receptor Substrate 1), whereas rapamycin suppresses IRS-1 phosphorylation. Metformin also inhibits MEK/ERK in the presence of growth factors, while rapamycin activates MEK/ERK by releasing feedback inhibition (Rozengurt et al. 2014). In male NcZ10 mice, combining rapamycin and metformin corrected for their independent downsides (Reifsnyder et al. 2022). Similar results were seen with 4 weeks of combination treatment in male Balb/c mice (4–6 weeks old) (Albawardi et al. 2023).

1 Like